Why Embedded Systems Complexity Is Driving a Requirements Management Crisis in Industrial Automation

PLCs, edge AI, and collaborative robots have crossed a software complexity threshold that hardware-centric organizations were never built to manage.


The Complexity Threshold Nobody Planned For

A decade ago, the software in a typical PLC application ran a few hundred rungs of ladder logic managing discrete I/O. The requirements were a printed spec sheet. Traceability meant a wiring diagram that matched the electrical panel. Compliance was a CE mark and a safety relay.

That world is gone.

A modern industrial automation deployment — even a mid-tier one — might include a PLC running IEC 61131-3 structured text across 50,000+ lines, an edge AI inference engine doing real-time defect detection at 200ms cycle time, a collaborative robot with 12 safety-monitored axes, and an IIoT gateway pushing telemetry to a cloud historian while simultaneously feeding a local digital twin. These systems talk to each other. They share state. They fail in non-obvious combinations.

The software complexity is no longer incidental to the product. It is the product. And the organizations that build and maintain these systems frequently lack the process infrastructure to manage that complexity with any rigor.

This is the requirements management crisis in industrial automation: not a shortage of documentation, but a structural mismatch between the engineering discipline the industry grew up with and the discipline the industry now requires.


Where Industrial Automation Companies Come From

To understand the crisis, you have to understand the organizational DNA. The dominant players in industrial automation — Siemens, Rockwell Automation, ABB, Mitsubishi Electric, Omron — grew up as mechanical and electrical engineering companies. Their customers were plant engineers and electricians. Their products were physical: motors, drives, sensors, panels.

Software was a configuration layer. Requirements were captured in customer specifications that described physical behavior: “Motor 3 shall ramp to 1450 RPM within 2 seconds of start command.” The software that implemented that behavior was written by controls engineers, often informally, often on-site, often modified in the field without documentation. Version control was a backup of the PLC project on a USB drive in a cabinet.

This worked — until the software became too complex for informal management to contain.

The inflection point came in waves. First, safety-critical embedded software in machinery began drawing ISO 13849 and IEC 61508 scrutiny from certification bodies and insurance underwriters. Then edge AI created systems whose behavior couldn’t be fully specified in advance: a defect detection model trained on one production batch behaves differently on the next. Then collaborative robots introduced probabilistic safety envelopes that required formal hazard analysis tied directly to software functions. Each wave added complexity. None of them came with a corresponding upgrade to how requirements were managed.


The Traceability Gap: Where Compliance Breaks Down

The specific failure mode that surfaces most often in audits and incident investigations is traceability.

Traceability, in this context, means the ability to follow a chain of evidence from a customer’s functional requirement — “the cobot shall stop within 100ms of detecting a human in Zone 2” — down through system architecture, software module design, implementation, test cases, and test results. Every link in that chain needs to exist, be current, and be auditable.

In practice, most industrial automation companies have the first link and the last link. They have a customer specification (often a Word document or a PDF) and they have test results (often a spreadsheet). The middle is a gap. The software architecture that translates customer intent into implemented behavior exists, if at all, in the heads of engineers and in comments inside PLC project files.
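The “missing middle” can be made concrete in data terms. A minimal sketch of the audit question, with a hypothetical chain taxonomy and invented IDs (no particular tool’s data model):

```python
# Hypothetical sketch of the evidence chain as data: each level must
# hold a current artifact, and the chain is auditable only when no
# level is empty. Level names and IDs are illustrative.
CHAIN = ["customer_requirement", "system_architecture",
         "module_design", "implementation", "test_case", "test_result"]

def audit_chain(evidence: dict) -> list:
    """Return the levels with no artifact attached: the missing middle."""
    return [level for level in CHAIN if not evidence.get(level)]

audit_chain({
    "customer_requirement": "REQ-47: stop within 100 ms of human in Zone 2",
    "test_result": "PASS, run 2024-11-02",   # first and last links exist...
})
# ...and the four middle levels come back as gaps
```

The typical company described above produces exactly that result: a populated first and last level, and everything between reported as a gap.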

This is not a corner case. A 2024 survey by the International Society of Automation found that fewer than 30% of machine builders had formal traceability between customer functional requirements and software-level test cases. Among companies with fewer than 500 engineers, the number was under 15%.

The consequences are concrete:

Compliance failures. IEC 62304, which governs software lifecycle processes for medical devices but is increasingly referenced in industrial safety contexts, requires that software requirements be traceable to system requirements, and that software tests verify software requirements. Without automated traceability, every audit becomes a manual forensics exercise that often reveals gaps too late to remediate without delaying certification.

Change management failures. When a customer changes a functional requirement mid-project — which happens constantly in industrial automation — there is no reliable mechanism to identify which software modules, test cases, and documentation artifacts need to change. Engineers make the obvious change and miss the non-obvious downstream effects. Field failures follow.

Incident investigation failures. When a machine behaves unexpectedly in the field, the question “what requirement was this behavior implementing, and was it correctly verified?” frequently cannot be answered. Root cause analysis stalls. Liability exposure grows.


IEC 62304 and IEC 61508: The Compliance Pressure Is Real

IEC 61508 — the functional safety standard for electrical, electronic, and programmable electronic safety-related systems — has been nominally on the books since 1998. But enforcement has been inconsistent, particularly for non-safety-classified software. The industry got comfortable with selective compliance.

That comfort is eroding. Three forces are converging.

Insurance and OEM pressure. Large OEMs and tier-1 industrial customers are now contractually requiring IEC 61508 SIL 2 or SIL 3 evidence for embedded software in safety-rated assemblies. The requirement isn’t just “show us your SIL certificate.” It’s “show us your software lifecycle documentation, your requirements traceability matrix, your hazard analysis, and your test evidence.” Suppliers who can’t produce that documentation are being replaced.

Edge AI and the limits of static standards. IEC 61508 was written for deterministic software. Edge AI inference engines are not deterministic in the relevant sense: their output is probabilistic, their behavior changes when the model is updated, and their failure modes are not captured by traditional fault tree analysis. Regulators and certification bodies are now working through how to apply functional safety principles to AI components in safety-relevant functions. The interim expectation — from TÜV SÜD and others — is rigorous requirements traceability and change management as a compensating control. You cannot claim safety of an AI component without demonstrating that every change to the model is evaluated against documented functional and safety requirements.

IEC 62304 cross-pollination. Industrial automation companies that serve the medical device or pharmaceutical automation markets are encountering IEC 62304 directly. The standard requires a full software development lifecycle with defined processes for requirements, architecture, detailed design, implementation, verification, and maintenance. Companies that previously separated their “medical” and “industrial” engineering teams are finding that separation untenable. The software development discipline required for FDA-regulated automation is starting to become the expected standard for all safety-relevant industrial software.


The Cobot and Edge AI Inflection Point

Collaborative robots and edge AI deserve specific attention because they represent a qualitative, not just quantitative, increase in requirements management difficulty.

Traditional industrial robots operate in fenced cells. The safety requirement is simple: keep humans out. The software that implements this is deterministic and verifiable. The requirements are static.

Collaborative robots operate in shared human-machine workspaces. The safety requirement is dynamic: detect human presence, classify proximity zones, adjust speed and force limits in real time, stop if a threshold is crossed. The software that implements this involves sensor fusion, state estimation, and — increasingly — learned models. The requirement “the cobot shall stop within 100ms of detecting a human in Zone 2” decomposes into dozens of lower-level software requirements, each of which needs to be verified under a range of operating conditions.

Edge AI compounds this. A vision-based zone detection system trained on the installation environment performs differently in a different lighting condition, with a different camera angle, or with a new human operator wearing unfamiliar PPE. The functional requirement hasn’t changed. But the software behavior has changed because the model weights changed. If you don’t have a requirements management process that captures the relationship between model training, model validation, and the functional requirements the model is meant to satisfy, you have a compliance gap and potentially a safety gap.
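One concrete way to capture that relationship is to treat the model version itself as a traced artifact: pin each deployed weight file, by content hash, to the exact requirement revisions it was validated against. A minimal sketch with invented IDs and a made-up validation record:

```python
import hashlib

# Hypothetical sketch: bind each deployed model version to the
# requirement revisions it was validated against, so a weight update
# that skips revalidation becomes detectable. All IDs are invented.
def model_fingerprint(weights: bytes) -> str:
    """Content hash of the weight file identifies the deployed model."""
    return hashlib.sha256(weights).hexdigest()[:12]

# fingerprint -> {requirement id: revision the model was validated against}
validation_record = {
    "a3f9c102bd44": {"REQ-CO-031": 4, "REQ-CO-044": 2},
}

def unvalidated_requirements(fingerprint: str, current_revs: dict) -> list:
    """Requirements whose current revision this model was never validated against."""
    seen = validation_record.get(fingerprint, {})
    return [rid for rid, rev in current_revs.items() if seen.get(rid) != rev]
```

A deployment gate that refuses to ship when this list is non-empty turns the model-to-requirement link into an enforced invariant rather than a convention.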

This is the requirements management problem in its sharpest form: not a failure to write requirements, but a failure to maintain the living connection between requirements and implemented behavior as systems evolve.


What Leading Industrial Companies Are Actually Doing

The industrial automation companies that are ahead of this problem share a few common characteristics. They are not solving it with more documentation. They are solving it with architecture.

Moving from documents to models. The most significant shift is from document-based requirements management (Word specs, PDF drawings, Excel requirements traceability matrices) to model-based or graph-based approaches where requirements, architectural elements, test cases, and traceability links are first-class objects in a connected system. This matters because a graph-based model can be queried: “show me all software modules affected by Requirement 47” returns an answer in seconds rather than a manual search across document versions.
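The difference between a document and a graph is that the graph answers that question mechanically. A plain-Python sketch of the impact query over an illustrative trace graph (all artifact IDs hypothetical):

```python
from collections import deque

# Hypothetical trace graph: each node carries a kind, and edges point
# downstream from an artifact to the artifacts that satisfy or verify it.
KIND = {"REQ-47": "requirement", "ARCH-zone": "architecture",
        "MOD-zone_monitor": "module", "MOD-speed_limit": "module",
        "TC-112": "test"}
DOWNSTREAM = {"REQ-47": ["ARCH-zone"],
              "ARCH-zone": ["MOD-zone_monitor", "MOD-speed_limit"],
              "MOD-zone_monitor": ["TC-112"]}

def affected(graph: dict, start: str, kind: str) -> set:
    """All artifacts of the given kind reachable downstream of `start`."""
    queue, seen = deque([start]), {start}
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return {n for n in seen - {start} if KIND[n] == kind}

affected(DOWNSTREAM, "REQ-47", "module")
# {"MOD-zone_monitor", "MOD-speed_limit"}
```

The same traversal, run against a document folder, is a week of manual cross-referencing; run against a graph, it is one query.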

Automating traceability, not delegating it. The manual requirements traceability matrix — the engineering team’s Excel file that someone updates quarterly — is the most common single point of failure in requirements management. Leading companies are replacing it with automated traceability engines that maintain links as artifacts change, flag broken links when requirements are modified, and generate compliance reports on demand. The goal is continuous traceability, not periodic documentation.
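What “flag broken links” means mechanically can be sketched in a few lines. The revision-pinning scheme here is illustrative, not any particular tool’s model:

```python
from dataclasses import dataclass

# Hypothetical sketch: each trace link records the requirement revision
# it was created against; any later revision of that requirement makes
# the link stale and the downstream artifact due for re-review.
@dataclass
class TraceLink:
    req_id: str
    artifact_id: str
    req_rev_at_link: int   # requirement revision when the link was made

def stale_links(links, current_rev):
    """Links whose requirement has been revised since the link was made."""
    return [l for l in links if current_rev.get(l.req_id, 0) > l.req_rev_at_link]

links = [TraceLink("REQ-47", "TC-112", 3), TraceLink("REQ-48", "TC-113", 2)]
stale_links(links, {"REQ-47": 5, "REQ-48": 2})  # only the REQ-47 link is stale
```

Run on every requirement change rather than quarterly, this is the difference between continuous traceability and the Excel file someone forgot to update.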

Integrating requirements with development toolchains. Requirements that live in a separate tool from the engineering work — disconnected from the PLC IDE, the CI/CD pipeline, the model training workflow — will not be maintained. Requirements traceability works only when it is embedded in the daily workflow of controls engineers, software developers, and safety engineers. Companies achieving this are integrating their requirements platform with tools like CODESYS, TIA Portal, and their MLOps pipelines.

Applying AI to requirements quality. Several industrial automation companies are now using AI-assisted requirements analysis to identify ambiguous, incomplete, or conflicting requirements before they enter development. This is not a solved problem, but early adopters are finding that automated analysis of requirement text — checking for passive voice constructions that obscure responsibility, missing verification criteria, or scope overlap with adjacent requirements — catches the class of defect that typically surfaces as a late-stage compliance failure.
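The checks named above are largely pattern-level and can be prototyped cheaply. A deliberately crude sketch, where the word lists and heuristics are illustrative and nowhere near a production analyzer:

```python
import re

# Hypothetical lint pass over requirement text: flags the ambiguity
# patterns described above. Word lists are illustrative, not a standard.
VAGUE = re.compile(r"\b(appropriate|adequate|as needed|fast|user[- ]friendly)\b", re.I)
PASSIVE = re.compile(r"\b(is|are|shall be|will be)\s+\w+ed\b", re.I)  # crude marker

def lint_requirement(text: str) -> list:
    issues = []
    if "shall" not in text.lower():
        issues.append("no 'shall': obligation is ambiguous")
    if VAGUE.search(text):
        issues.append("vague term: not verifiable as written")
    if PASSIVE.search(text) and " by " not in text.lower():
        issues.append("passive voice with no responsible actor")
    if not re.search(r"\d", text):
        issues.append("no measurable quantity: verification criterion unclear")
    return issues

lint_requirement("The alarm is raised when appropriate.")  # flags all four issues
```

A requirement like “Motor 3 shall ramp to 1450 RPM within 2 seconds of start command” passes cleanly; the value of even this trivial pass is that it runs on every requirement at entry, not on a sample during a late-stage audit.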

Tools like Flow Engineering are positioned squarely in this direction — built for hardware and systems engineering teams, with graph-based requirements models, AI-assisted traceability, and an architecture that treats the connection between customer requirements and engineering artifacts as a living, queryable structure rather than a document. For industrial automation teams trying to close the traceability gap without adding a documentation burden on top of their engineering work, this represents a materially different approach from legacy requirements tools that were designed for aerospace and defense organizations with dedicated systems engineering departments.


The Honest Assessment

Industrial automation companies are not failing at requirements management because they lack discipline. They are failing because the software complexity of their products has outpaced the organizational and tooling infrastructure they inherited from a hardware-centric past. The gap is structural.

Closing it requires three things that are individually straightforward and collectively difficult: a requirements process that engineers will actually use, tooling that integrates into existing development workflows rather than adding parallel documentation overhead, and organizational leadership that recognizes requirements traceability as a product quality and liability issue, not a compliance checkbox.

The companies making the most progress are the ones that stopped treating this as a documentation problem. The requirement exists whether you capture it or not. The question is whether you can prove, under audit, under incident investigation, or under a product liability claim, that you knew what behavior your software was supposed to implement, that you verified it, and that you can show exactly what changed and why when something goes wrong.

That is not a documentation standard. That is an engineering standard. And the industrial automation industry is, slowly and under pressure, learning the difference.