What Is Functional Safety?

Functional safety is the part of overall safety that depends on a system or equipment operating correctly in response to its inputs, including the input of a failure. That second clause — including the input of a failure — is the whole game. A pressure relief valve that works perfectly under normal conditions but fails to open when pressure exceeds its setpoint is not functionally safe. Neither is a braking system that performs correctly until a sensor goes open-circuit and the system interprets silence as a valid signal.

The formal definition comes from IEC 61508:2010, the international electrotechnical standard for functional safety of electrical, electronic, and programmable electronic (E/E/PE) safety-related systems. IEC 61508 defines functional safety as “part of the overall safety relating to the EUC [equipment under control] and the EUC control system that depends on the correct functioning of the E/E/PE safety-related systems and other risk reduction measures.”

That definition is dense. The operational meaning is this: functional safety asks whether your system reduces risk to a tolerable level when something goes wrong, and whether you can prove it.


The Standard Family: IEC 61508 and Its Domain Derivatives

IEC 61508 is intentionally generic. It covers any industry sector where E/E/PE systems perform safety functions. From it, domain-specific standards have been derived — each calibrated to the hazard profiles, development contexts, and regulatory environments of a particular industry.

ISO 26262 — Automotive

Published by ISO and now in its second edition (2018), ISO 26262 applies to safety-related electrical and electronic systems in series-production road vehicles. The 2011 first edition was limited to passenger cars with a maximum gross vehicle mass up to 3,500 kg; the 2018 edition extended the scope to trucks, buses, and trailers, with a dedicated part adapting the standard for motorcycles. It introduces Automotive Safety Integrity Levels (ASIL), rated A through D, where ASIL D represents the highest integrity requirement. ASIL is determined from three parameters assessed during hazard analysis and risk assessment (HARA): severity of potential harm, exposure to the hazardous situation, and controllability by the driver or other road users. The combination of those parameters maps to an ASIL or, where risk is sufficiently low, a QM (quality management) designation that carries no specific ISO 26262 requirements beyond normal engineering practice.
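The S/E/C-to-ASIL mapping can be sketched in a few lines. Below is a minimal Python illustration, assuming integer class inputs; the additive shortcut is a compact encoding that reproduces the lookup table in ISO 26262-3, but in practice the determination rests on the documented HARA rationale for each class assignment, not on arithmetic:

```python
# Illustrative sketch of ASIL determination from severity (S1-S3),
# exposure (E1-E4), and controllability (C1-C3) classes, per the
# ASIL table in ISO 26262-3. The sum-based shortcut below reproduces
# that table; the function name is invented for this example.

def determine_asil(s: int, e: int, c: int) -> str:
    """Map S1-S3, E1-E4, C1-C3 to QM or ASIL A-D."""
    if not (1 <= s <= 3 and 1 <= e <= 4 and 1 <= c <= 3):
        raise ValueError("S must be 1-3, E 1-4, C 1-3")
    total = s + e + c          # ranges from 3 to 10
    if total <= 6:
        return "QM"            # quality management: no specific ASIL requirement
    return "ASIL " + "ABCD"[total - 7]

# The worst credible case (S3, E4, C3) yields ASIL D;
# a low-severity, low-exposure hazard stays at QM.
print(determine_asil(3, 4, 3))  # ASIL D
print(determine_asil(1, 2, 1))  # QM
```

The shortcut makes the structure of the table visible: each one-step increase in severity, exposure, or controllability class raises the integrity requirement by one level once the tolerable-risk threshold is crossed.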

ISO 26262 also introduces the concept of ASIL decomposition: splitting a safety goal into two independent sub-requirements with lower individual ASIL ratings, provided the two paths are sufficiently independent. This is a design strategy, not a loophole — the independence argument must be made rigorously.
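The permitted decomposition schemes from ISO 26262-9 can be captured as a small validity check. A sketch only, with an invented function name; the hard part of any real decomposition is the independence argument, which no table lookup can supply:

```python
# Sketch of the ASIL decomposition schemes listed in ISO 26262-9.
# Each pair is one permitted split of the original ASIL across two
# sufficiently independent elements; the order of the pair is irrelevant.

VALID_DECOMPOSITIONS = {
    "D": {("C", "A"), ("B", "B"), ("D", "QM")},
    "C": {("B", "A"), ("C", "QM")},
    "B": {("A", "A"), ("B", "QM")},
    "A": {("A", "QM")},
}

def is_valid_decomposition(original: str, part1: str, part2: str) -> bool:
    """True if (part1, part2) is a permitted split of the original ASIL."""
    pairs = VALID_DECOMPOSITIONS.get(original, set())
    return (part1, part2) in pairs or (part2, part1) in pairs

print(is_valid_decomposition("D", "B", "B"))  # True: D may split into B(D) + B(D)
print(is_valid_decomposition("D", "C", "B"))  # False: not a permitted scheme
```

Note that decomposed requirements retain the original ASIL in parentheses, e.g. ASIL B(D), because certain requirements of the original level still apply to both paths.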

IEC 62304 — Medical Device Software

IEC 62304 covers the software lifecycle processes for medical device software, and applies to software that is itself a medical device or software embedded in a medical device. It defines three software safety classes — Class A (no injury or damage to health possible), Class B (non-serious injury possible), and Class C (death or serious injury possible) — each requiring progressively more rigorous development, verification, and maintenance processes. IEC 62304 is tightly coupled to ISO 14971, the risk management standard for medical devices, which provides the hazard analysis that feeds class assignment. Unlike automotive or process industry standards, IEC 62304 does not use probability-based targets; it specifies process requirements and architectural constraints.

IEC 61511 — Process Industries

IEC 61511 addresses the functional safety of safety instrumented systems (SIS) in the process industry sector — oil and gas, chemical, refining, and similar. It applies the SIL framework from IEC 61508 but adapts it to the specific context of process plant safety instrumented functions (SIFs). A key addition in the 2016 edition is the requirement for a security risk assessment to address cybersecurity threats to SIS, recognizing that a digitally compromised SIS is a safety risk regardless of its hardware integrity.

EN 50128 — Rail

EN 50128 (CENELEC, with IEC equivalent IEC 62279) governs software for railway control and protection systems. Like IEC 62304, it is a software-specific standard, and it maps to SIL 0 through SIL 4 using the IEC 61508 framework. EN 50128 is notable for its explicit, detailed tables of recommended, highly recommended, and mandatory techniques and measures for each SIL level — making it one of the most prescriptive standards in the family.


Safety Integrity Levels: What SIL and ASIL Actually Mean

Safety Integrity Level (SIL) is a discrete level (1 through 4 for IEC 61508 and IEC 61511; 0 through 4 for EN 50128) that specifies a target measure of risk reduction for a safety function. Higher SIL means more risk reduction required. For low-demand functions, SIL 4 demands an average probability of dangerous failure on demand (PFDavg) between 10⁻⁵ and 10⁻⁴; SIL 1 tolerates PFDavg between 10⁻² and 10⁻¹. High-demand and continuous-mode functions use a different measure: the average frequency of dangerous failure per hour (PFH).
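The low-demand bands can be expressed as a simple lookup. A minimal Python sketch, with band edges following the IEC 61508-1 target-failure-measure table; the function name is invented for this illustration:

```python
# Sketch of the low-demand SIL bands from IEC 61508-1: average
# probability of dangerous failure on demand (PFDavg) versus SIL.
# Returns the SIL whose band contains the computed PFDavg.

SIL_BANDS = [            # (SIL, lower bound inclusive, upper bound exclusive)
    (4, 1e-5, 1e-4),
    (3, 1e-4, 1e-3),
    (2, 1e-3, 1e-2),
    (1, 1e-2, 1e-1),
]

def sil_from_pfd(pfd_avg: float):
    for sil, low, high in SIL_BANDS:
        if low <= pfd_avg < high:
            return sil
    return None  # outside SIL 1-4: either beyond SIL 4 or no risk-reduction credit

print(sil_from_pfd(5e-3))  # 2
print(sil_from_pfd(2e-4))  # 3
```

A reliability analysis that lands a safety function at PFDavg of 5 × 10⁻³, for example, supports at most a SIL 2 claim on the hardware-reliability leg of the argument; the architectural, systematic, and process requirements for that SIL must still be met separately.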

Two common misconceptions deserve direct correction.

SIL is not a product rating. A sensor or PLC cannot be “SIL 2 certified” in isolation. Manufacturers can claim that their components have been assessed and found suitable for use in SIL 2 applications, but SIL is a property of a complete safety function implemented in a system, achieved through the combination of hardware reliability, architectural constraints, software integrity, and the operational and maintenance regime.

SIL is a target, not a checkbox. Meeting a SIL requires a structured argument — typically a safety case — supported by quantitative reliability analysis (fault tree analysis, FMEA, or Markov models), software integrity evidence (verification and validation records), and proof of process compliance across the safety lifecycle. An assessor or notified body evaluates whether the argument holds. “We used SIL 2 components” is not an argument.

ASIL operates similarly but uses a different scale calibrated to automotive hazards. ASIL A is the lowest requirement; ASIL D is the highest. An ASIL D safety goal — for example, “the vehicle shall not apply unintended maximum braking” — demands the most stringent combination of architectural and software measures, independent verification, and design diversity.


The Safety Lifecycle: What the Standards Actually Require You to Do

Every standard in the IEC 61508 family mandates a safety lifecycle — a structured sequence of activities from concept through decommissioning, with defined inputs, outputs, and reviews at each stage. The specific phases vary by standard, but the logical spine is consistent across all of them.

1. Concept and Scope Definition. Define the system boundary. What is the equipment under control? What is the intended function? What are the operating modes and environments?

2. Hazard Analysis and Risk Assessment. Identify hazardous events, assess their severity and likelihood, and assign tolerable risk targets. In ISO 26262 this is the HARA; in IEC 61511 it is the process hazard analysis (PHA). The output is a set of safety goals (ISO 26262) or safety requirements for the SIS (IEC 61511) — top-level requirements that the system must satisfy to reduce risk to tolerable levels.

3. Functional Safety Requirements. Translate safety goals into functional requirements for the safety-related system. At this stage, the requirements describe what the system must do, not how. ASIL or SIL allocations are assigned here.

4. System Architecture and Technical Safety Requirements. Allocate functional safety requirements to hardware and software elements. Define the system architecture. Derive technical safety requirements (TSR) for each element — the implementable, verifiable requirements that flow to hardware engineers, software developers, and systems integrators.

5. Design and Implementation. Develop hardware, software, and integrated systems in accordance with the technical safety requirements and the standard’s recommended techniques.

6. Verification and Validation. Verify that each element meets its requirements. Validate that the system as a whole achieves its safety goals. This is not a single gate at the end — it is continuous throughout the lifecycle.

7. Functional Safety Assessment. An independent assessment (internal or external, depending on the ASIL/SIL level) evaluates whether functional safety has been achieved. The assessor examines the complete safety case: arguments, evidence, and the traceability that connects them.

8. Production, Operation, and Maintenance. Maintain functional safety through the operational life of the system, including management of changes, field failures, and eventual decommissioning.

The critical architectural requirement threading through all of these phases is bidirectional traceability: every safety goal must trace forward to the functional safety requirements that implement it, and every technical safety requirement must trace backward to the functional safety requirement from which it is derived — and ultimately to the hazard it mitigates. Gap analysis is not optional. An assessor will look for it.


How Modern Tools Implement Safety Traceability

Managing this traceability in spreadsheets or word-processed documents is technically possible for small systems. For any program of meaningful complexity — a vehicle platform with dozens of safety goals, hundreds of functional safety requirements, and thousands of technical safety requirements across multiple suppliers — it becomes an audit liability and an engineering hazard in its own right.

Document-based tools like IBM DOORS and DOORS Next have long been the industry standard for requirements management in safety-critical programs. They provide structured requirements capture, link management, and change history. Their strength is maturity and regulatory familiarity — assessors know what a DOORS export looks like. Their limitation, particularly in complex multi-domain programs, is that traceability is link-by-link through documents, making impact analysis and coverage reporting laborious and error-prone as the design evolves.

Flow Engineering takes a different architectural approach. It represents the entire requirements model — hazard analyses, safety goals, functional safety requirements, technical safety requirements, verification cases — as a connected graph, where relationships between nodes are first-class objects rather than document cross-references. This matters for safety programs in several concrete ways.

First, coverage gaps surface automatically. In a graph model, a functional safety requirement with no upward link to a safety goal, or no downward link to any technical safety requirement, is an orphan — visible immediately without manual tracing. In a document model, that gap requires someone to run a compliance matrix and notice the empty cell. Flow Engineering makes the gap structural, not observational.
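As a toy illustration of why the gap becomes structural rather than observational (this is not Flow Engineering's implementation; the node names and dictionary layout are invented for the example), consider a minimal requirements graph:

```python
# Toy requirements graph: nodes carry a type and an upward
# "derives_from" link. An orphan check is a pure graph query,
# not a manual pass over a compliance matrix.

requirements = {
    "SG-01":   {"type": "safety_goal", "derives_from": None},
    "FSR-10":  {"type": "functional",  "derives_from": "SG-01"},
    "FSR-11":  {"type": "functional",  "derives_from": None},    # no safety goal
    "TSR-100": {"type": "technical",   "derives_from": "FSR-10"},
}

def orphans(reqs):
    """Functional requirements missing an upward link to a safety goal,
    and functional requirements no technical requirement derives from."""
    covered = {r["derives_from"] for r in reqs.values() if r["type"] == "technical"}
    missing_parent = [k for k, r in reqs.items()
                      if r["type"] == "functional" and r["derives_from"] is None]
    missing_child = [k for k, r in reqs.items()
                     if r["type"] == "functional" and k not in covered]
    return missing_parent, missing_child

print(orphans(requirements))  # (['FSR-11'], ['FSR-11'])
```

In a document-based workflow, FSR-11 sits in a requirements table looking exactly like its neighbors; in the graph, its missing edges make it a query result.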

Second, change impact is computable. When an ASIL D safety goal is revised following a HARA update — a common occurrence as the system design matures — Flow Engineering can traverse the graph and identify every functional and technical safety requirement that derives from it, every verification case that covers it, and every design element allocated to it. That impact set is the working list for the safety engineer managing the change. In a document-based system, assembling that list is hours of manual cross-referencing.
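The impact computation itself is an ordinary graph traversal. A sketch under invented node names (again, illustrative only, not the product's internals):

```python
# Toy change-impact traversal over a requirements graph. Edges point
# downstream: safety goal -> derived requirements -> verification cases.

from collections import deque

downstream = {
    "SG-01":   ["FSR-10", "FSR-11"],
    "FSR-10":  ["TSR-100", "TSR-101"],
    "FSR-11":  ["TSR-102"],
    "TSR-100": ["VC-500"],
    "TSR-101": [],
    "TSR-102": ["VC-501"],
}

def impact_set(changed: str) -> set:
    """Breadth-first traversal: everything reachable from the changed node."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for child in downstream.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# Revising SG-01 touches every derived requirement and verification case.
print(sorted(impact_set("SG-01")))
# ['FSR-10', 'FSR-11', 'TSR-100', 'TSR-101', 'TSR-102', 'VC-500', 'VC-501']
```

The returned set is exactly the working list the source text describes: the requirements to re-examine, the verification cases to re-run, and the allocations to re-confirm after the safety goal changes.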

Third, the AI-native tooling in Flow Engineering supports the authoring of safety requirements themselves. Requirements that are ambiguous, non-verifiable, or that duplicate an existing requirement are surfaced during authoring, not during a review cycle weeks later. For safety-critical requirements — where a poorly worded requirement can lead to a genuine gap in the safety argument — early-stage quality feedback reduces rework and the risk of propagating a defective requirement through decomposition.

Flow Engineering is purpose-built for hardware and systems engineering teams working in exactly this domain. Its intentional focus is on the requirements model and traceability layer — it is not a full product lifecycle management (PLM) suite, and teams that need integrated BOM management, manufacturing process planning, or configuration management will connect Flow Engineering to PLM and ERP systems for those functions. That is a deliberate architectural boundary, not a gap in the safety-relevant workflow.


Practical Starting Points

If your team is beginning a functional safety program, or inheriting one that has accumulated traceability debt, three practices consistently distinguish programs that pass assessment from those that struggle.

Start the safety lifecycle at concept, not design. Hazard analysis conducted after the architecture is fixed is a compliance exercise, not an engineering activity. Safety goals written after ASILs have already been assigned informally tend to be justified retroactively rather than drive genuine design decisions.

Treat traceability as a live artifact, not a deliverable. Requirements traceability matrices produced at program milestones for assessors are necessary but not sufficient. The working traceability model should reflect the current state of the design at all times. When it doesn’t — when the RTM is three months behind the actual requirement set — the gap is where the audit findings live.

Assign SIL and ASIL through the process, not before it. Teams under schedule pressure sometimes assign ASIL targets before completing the HARA, reasoning that they’ll validate the assignment later. The assignment should be the output of the risk assessment, not an input to it. An assessor will examine the HARA and the rationale for every ASIL assignment.

Functional safety is not a checklist, and the standards are not pass/fail quizzes. They define a discipline — systematic hazard reasoning, structured decomposition, rigorous traceability — that, applied genuinely, produces systems that fail safely. The standards codify what good engineering for life-critical systems looks like. The tools support that engineering, but they do not replace the judgment behind it.