Cybersecurity Requirements in Hardware Products: A Different Kind of Systems Engineering Problem

Most systems engineering problems are bounded. You define requirements, allocate them to subsystems, build verification evidence, and close the loop. The product ships, the requirements baseline is archived, and the traceability matrix is a record of what happened.

Cybersecurity does not work that way.

The threat environment your product faces on day one of deployment is not the threat environment it will face in year three. Vulnerabilities are discovered in components you didn’t build. Attack techniques evolve to exploit design patterns that were considered safe when you adopted them. Regulatory bodies issue new guidance mid-product-lifecycle. And unlike a mechanical failure mode — which is a fixed physical characteristic of the product — a software or firmware vulnerability is an adversarial finding, meaning someone is actively looking for it.

That does not mean cybersecurity requirements are unmanageable. It means they require a different systems engineering discipline than most hardware teams apply by default.


The Core Problem: Requirements That Depend on Inputs You Don’t Have Yet

In traditional systems engineering, requirements flow downward from stakeholder needs and operational concepts. You can, in principle, write a complete requirements set before any design work begins. The V-model assumes this. So does most requirements management tooling.

Cybersecurity requirements violate this assumption in two ways.

First, they depend on a threat model, and the threat model depends partly on the design. You cannot identify all relevant threats until you know what assets the system holds, what interfaces it exposes, what components it uses, and how it connects to the operational environment. Some of that information doesn’t exist at requirements inception. A security requirement like “the device shall authenticate all firmware update requests using cryptographically signed certificates” only makes sense once you know that firmware updates are a feature, that they happen over a network interface, and that unauthenticated updates are a viable attack path. Threat modeling — using structured methods like STRIDE, TARA (Threat Analysis and Risk Assessment per ISO/SAE 21434), or PASTA — is a requirements input, not a downstream verification activity.
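To make the "requirements input" framing concrete, here is a minimal sketch of a STRIDE pass over external interfaces producing draft requirements. Everything in it is hypothetical — the interface names, the properties, and the category-to-requirement wording are invented for illustration, not a real threat catalog — but it shows why the exercise cannot start until design facts like "firmware updates happen over a network interface" exist.

```python
# Illustrative sketch: deriving candidate security requirements from a
# STRIDE pass over external interfaces. All names and mappings are
# hypothetical, not a real catalog.

# Design facts that only exist once interfaces are defined -- which is
# why threat modeling is a requirements *input*, not verification.
interfaces = {
    "firmware_update": {"network_facing": True, "carries_executable": True},
    "debug_uart":      {"network_facing": False, "carries_executable": False},
}

# Which STRIDE categories apply, given an interface's properties.
stride_triggers = {
    "Spoofing":               lambda p: p["carries_executable"],
    "Tampering":              lambda p: p["carries_executable"],
    "Information disclosure": lambda p: p["network_facing"],
    "Denial of service":      lambda p: p["network_facing"],
}

# Draft requirement text per category (hypothetical wording).
draft_requirement = {
    "Spoofing":               "shall authenticate the source of all requests",
    "Tampering":              "shall verify cryptographic signatures before use",
    "Information disclosure": "shall encrypt all traffic in transit",
    "Denial of service":      "shall rate-limit inbound requests",
}

baseline = []
for name, props in interfaces.items():
    for category, applies in stride_triggers.items():
        if applies(props):
            baseline.append(f"SEC-{name}: {draft_requirement[category]}")

for req in baseline:
    print(req)
```

Note that the debug UART, which exposes no network surface in this sketch, generates no candidate requirements — the threat model scales the requirements set to the actual attack surface rather than applying every control everywhere.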

Second, threats evolve after the product ships. A requirement that was sufficient at design freeze may be inadequate two years into production. This is not a defect in the original engineering. It is the nature of adversarial systems. Managing this reality requires that requirements — and the traceability from requirements to design decisions to verification evidence — remain accessible and updateable throughout the product’s operational life, not just during development.


Two Categories of Security Requirements That Must Not Be Conflated

One of the most common structural errors in cybersecurity requirements management is treating all security requirements as a single list. They are not. There are two fundamentally different types, and conflating them causes problems at verification, at compliance, and in the field.

Security Function Requirements

These describe what the system does to achieve security. They are operational. Examples:

  • The device shall enforce a minimum password length of 12 characters for all user accounts.
  • The system shall encrypt all data transmitted over external network interfaces using TLS 1.3 or later.
  • The controller shall log all failed authentication attempts with a timestamp and source identifier.
  • Firmware shall be verified against a hardware root of trust before execution.

Security function requirements are testable in the traditional sense. You can build a test procedure, execute it, produce evidence, and close a verification record. They fit naturally into a standard requirements management workflow.
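As a sketch of what "testable in the traditional sense" means in practice, the password-length requirement above could be verified by a boundary-case procedure like the following. The policy function and requirement ID are stand-ins invented for illustration, not any product's actual implementation.

```python
# Minimal sketch of a verification test for a security *function*
# requirement: "minimum password length of 12 characters".

MIN_PASSWORD_LENGTH = 12  # traced to hypothetical requirement SEC-AUTH-001

def password_accepted(candidate: str) -> bool:
    """Stand-in for the device's account-creation policy check."""
    return len(candidate) >= MIN_PASSWORD_LENGTH

def verify_sec_auth_001():
    """Verification procedure: boundary cases around the 12-char minimum."""
    results = {
        "rejects_11_chars": not password_accepted("a" * 11),
        "accepts_12_chars": password_accepted("a" * 12),
        "rejects_empty":    not password_accepted(""),
    }
    return all(results.values()), results

passed, evidence = verify_sec_auth_001()
print("SEC-AUTH-001:", "PASS" if passed else "FAIL", evidence)
```

The output of a run like this is exactly the kind of artifact that closes a verification record: a pass/fail verdict plus the evidence behind it, traceable back to a single requirement.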

Security Assurance Requirements

These describe the confidence level you need to have in your security claims — and the processes, analyses, and evidence required to achieve that confidence. They are often derived from standards and from the product’s target assurance level. Examples:

  • A STRIDE threat analysis shall be conducted on all external interfaces and reviewed by a qualified security engineer.
  • The development environment shall enforce separation between production code repositories and test infrastructure.
  • Penetration testing shall be conducted by an independent team prior to each major firmware release.
  • A software bill of materials (SBOM) shall be maintained and updated with each build, identifying all third-party components and their versions.

Assurance requirements do not describe system behavior. They describe the engineering process and the evidence that process must produce. They are verified through audits, documentation reviews, and process assessments — not by running a test against the product. IEC 62443-4-1 (secure development lifecycle requirements for product suppliers) is almost entirely an assurance standard. NIST SP 800-53 mixes both types, which is part of why applying it naively produces compliance theater rather than security.
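The SBOM requirement above is a useful example of how an assurance requirement becomes a process artifact rather than a product behavior. A minimal sketch, with invented component names and versions — a real pipeline would emit a standard format such as CycloneDX or SPDX rather than ad-hoc JSON:

```python
# Sketch of the assurance requirement "an SBOM shall be maintained and
# updated with each build". Components and versions are hypothetical.

import json
from datetime import date

def build_sbom(build_id, components):
    """Record third-party components and their versions for one build."""
    return {
        "build": build_id,
        "generated": date.today().isoformat(),
        "components": [
            {"name": n, "version": v} for n, v in sorted(components.items())
        ],
    }

sbom = build_sbom("fw-1.4.2+build.77", {
    "mbedtls": "3.5.1",   # hypothetical third-party component versions
    "freertos": "10.6.2",
    "lwip": "2.2.0",
})
print(json.dumps(sbom, indent=2))
```

Verifying this requirement means auditing that the record exists for every build and is accurate — not running a test against the device.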

Managing these two categories separately — with explicit traceability between assurance requirements and the function requirements they support — is a prerequisite for coherent security engineering at scale.


Compliance Is a Floor, Not a Destination

IEC 62443, NIST SP 800-53, the EU Cyber Resilience Act, FDA cybersecurity guidance for medical devices, and similar frameworks serve a real purpose. They represent accumulated industry knowledge about common failure modes. Compliance demonstrates a baseline level of security engineering discipline to customers, regulators, and auditors.

They are not sufficient.

The reason is structural. Standards are written by committees, reviewed over multi-year cycles, and published at a point in time. The threat environment moves faster than the standards process. A product that satisfies every control in a published framework may still be vulnerable to attack techniques that postdate the standard, or to threat vectors specific to its operational environment that no generic framework could anticipate.

The practical implication for systems engineers: build your security requirements from the threat model first, then map to compliance frameworks second. The threat model tells you what you actually need to defend against. The compliance mapping tells you whether you’ve met your contractual and regulatory obligations. Both matter, but in that order. Running the process in reverse — starting with the control catalog and working backward — produces a requirements set that is auditable but not necessarily protective.
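The "threat model first, compliance second" ordering can be expressed as a gap check in both directions. In this sketch the requirement IDs, control IDs, and mappings are invented; the point is the shape of the query, not the content.

```python
# Sketch: threat-derived requirements mapped to a control catalog as a
# gap check. All IDs and mappings are hypothetical.

threat_derived_reqs = {
    "SEC-001": {"controls": {"CR-1.1"}},   # authenticate firmware updates
    "SEC-002": {"controls": {"CR-4.1"}},   # encrypt external traffic
    "SEC-003": {"controls": set()},        # environment-specific threat;
                                           # no catalog control maps to it
}

# Contractual/regulatory obligations from the chosen framework.
contractual_controls = {"CR-1.1", "CR-4.1", "CR-7.6"}

covered = set().union(*(r["controls"] for r in threat_derived_reqs.values()))

# Gap 1: obligations with no requirement behind them -> compliance gap.
compliance_gaps = contractual_controls - covered

# Gap 2: requirements no framework asked for -> protection the control
# catalog alone would have missed.
beyond_catalog = [rid for rid, r in threat_derived_reqs.items()
                  if not r["controls"]]

print("unmet controls:", sorted(compliance_gaps))
print("threat-only requirements:", beyond_catalog)
```

Run in reverse — starting from `contractual_controls` and writing one requirement per control — and the second category never appears, which is exactly the "auditable but not necessarily protective" failure mode.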


Post-Market Monitoring as a Systems Engineering Obligation

This is where the difference between cybersecurity and almost every other engineering discipline becomes starkest.

For a mechanical component, the design is fixed at production. Post-market surveillance monitors for failures in the field, but the failure modes are bounded by physics. For a software-intensive product, vulnerabilities are discovered continuously after shipment — in your own code, in third-party libraries, in underlying operating systems and communication stacks. A component that has no known vulnerabilities on ship date may have five critical-severity CVEs filed against it within 18 months.

In medical devices, the FDA’s 2023 cybersecurity guidance and Section 524B of the FD&C Act (added by the Consolidated Appropriations Act of 2023) create explicit regulatory obligations for post-market monitoring. Manufacturers must have a plan for monitoring, identifying, and addressing cybersecurity vulnerabilities in fielded devices. This is not optional guidance. It has 510(k) and PMA implications.

In critical infrastructure, IEC 62443-2-3 addresses patch management for asset owners and operators. For product suppliers, this creates expectations around how they will communicate vulnerability information and support remediation throughout the product lifecycle.

The systems engineering implication is this: the requirements baseline and traceability architecture you build during development become operational assets, not just development artifacts. When a new vulnerability is discovered in a fielded product, you need to answer: Which design decisions were made based on an assumption that this component was secure? Which requirements will need to be revised? Which verification activities will need to be repeated? If your requirements exist as a closed document archive, answering those questions requires reconstructing the architecture from scratch. If they exist as a living, traceable model, you can navigate directly to the affected requirements and the downstream implications.
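What "navigate directly to the affected requirements" means in graph terms can be sketched as a traversal. The node names and edges below are hypothetical; the point is that with a live traceability model, the post-market impact question is a query, not a reconstruction exercise.

```python
# Sketch: navigating a traceability graph when a CVE lands on a fielded
# component. Nodes and edges are hypothetical.

from collections import deque

# Directed edges: "Y is impacted if X changes or is compromised".
impacted_by = {
    "lib:tls_stack":          ["design:secure_channel"],
    "design:secure_channel":  ["req:SEC-002", "req:SEC-005"],
    "req:SEC-002":            ["verif:VT-012"],
    "req:SEC-005":            ["verif:VT-019", "req:SAFE-103"],  # safety link
}

def downstream_impact(node):
    """Breadth-first walk: everything that may need re-review."""
    seen, queue = set(), deque([node])
    while queue:
        current = queue.popleft()
        for nxt in impacted_by.get(current, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A new CVE is filed against the TLS stack: what does it touch?
print(sorted(downstream_impact("lib:tls_stack")))
```

Note that the walk surfaces a safety requirement (`req:SAFE-103`) reached through a security requirement — the kind of cross-discipline implication that a document archive forces you to rediscover by hand.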


The Safety-Security Interaction Problem

Hardware products in regulated industries — medical devices, industrial control systems, automotive electronics, aerospace subsystems — carry both safety requirements and security requirements. These disciplines are related but not identical, and they interact in ways that can produce dangerous engineering oversights when managed in separate silos.

Consider a device that uses a software watchdog to detect processor lockup conditions and trigger a safe-state response. That watchdog is a safety mechanism. If an attacker can send a message that suppresses or manipulates the watchdog without being authenticated, the security gap is also a safety failure path. The safety requirements say the watchdog must work. The security requirements say the watchdog must be protected. Neither discipline, managed independently, necessarily surfaces the interaction.

This is why modern security-safety co-engineering approaches — reflected in IEC 62443 for industrial systems and in the emerging ISO/SAE 21434 / IEC 62443 alignment guidance for automotive-adjacent products — require shared architectural models, not just parallel requirement lists with a cross-reference spreadsheet appended.

The practical starting point is to treat your safety and security requirements as residents of the same architecture model, with explicit relationships between them. Where a security control protects a safety mechanism, that relationship should be traceable. Where a safety response might create a security-exploitable state (some safe-state behaviors are predictable, which makes them useful to attackers), that should be flagged as a design consideration.
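A minimal sketch of what "explicit relationships" buys you: with typed links between security and safety requirements in one model, a query can list every safety mechanism that no security control protects. The requirement IDs and relation names are invented for illustration.

```python
# Sketch: safety and security requirements in one model with typed
# relationships. IDs and relation names are hypothetical.

relations = [
    # (source, relation, target)
    ("SEC-014",  "protects",        "SAFE-007"),     # auth on watchdog msgs
    ("SAFE-007", "safe_state_is",   "predictable"),  # flagged design concern
]

def controls_protecting(safety_req):
    """Security controls explicitly traced to a safety requirement."""
    return [s for s, rel, t in relations
            if rel == "protects" and t == safety_req]

def unprotected_safety_mechanisms(safety_reqs):
    """Safety requirements with no security control traced to them."""
    return [r for r in safety_reqs if not controls_protecting(r)]

print(controls_protecting("SAFE-007"))
print(unprotected_safety_mechanisms(["SAFE-007", "SAFE-021"]))
```

Neither discipline's own review would have asked the second question; it only exists because the two requirement sets share a model.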


How Modern Tooling Supports This Discipline

The requirements management tooling most hardware teams have inherited was designed for document-based processes: hierarchical requirement lists, manual traceability matrices, change logs managed through version-controlled documents. That infrastructure is not wrong, but it creates friction at exactly the points where cybersecurity requirements management needs agility — updating requirements in response to new threat intelligence, maintaining live traceability between security and safety requirements, and navigating the architecture to assess post-market impact quickly.

Flow Engineering (flowengineering.com) handles security requirements with the same graph-based traceability discipline it applies to safety requirements, which means the two disciplines can coexist in a single coherent architecture model rather than in separate document silos. When a new threat is identified and a security requirement needs updating, the downstream implications — to design decisions, to verification activities, to related safety requirements — are navigable rather than reconstructed. For teams operating under post-market monitoring obligations in medical devices or critical infrastructure, that navigability is the difference between a manageable update process and a multi-week forensic exercise.

The deliberate focus is on requirements and traceability, not on threat modeling tools or vulnerability databases; those integrate as inputs. Organizations that need a single integrated platform for complete security lifecycle management, including asset inventory and CVE tracking, are outside the scope Flow Engineering is designed for — that need is better served by dedicated security operations platforms.


A Practical Starting Point for Teams Getting This Right

If your team is applying standard systems engineering methods to cybersecurity requirements without modification, these are the concrete adjustments that will close the most significant gaps:

1. Conduct threat modeling as a requirements input activity, not a verification activity. Schedule a STRIDE or TARA session at the point when system interfaces and major components are defined but before detailed design begins. Feed the outputs directly into your security requirements baseline.

2. Separate function requirements from assurance requirements in your requirements structure. Use distinct requirement types or a clear naming convention. Verify them through different methods. Trace assurance requirements to the function requirements they support.

3. Map to compliance frameworks after building from the threat model. Use the compliance mapping as a gap check, not as your primary requirements source.

4. Plan the post-market traceability architecture during development. Decide now how you will navigate from a fielded CVE back to the requirements and design decisions it affects. Build that architecture before you need it.

5. Create explicit traceability between security requirements and safety requirements wherever the two interact. This does not require merging the disciplines — it requires acknowledging the intersections.

The goal is not a perfect security requirements specification at design freeze. That is not achievable for an adversarial, evolving threat environment. The goal is a requirements architecture that is honest about what is known, traceable enough to navigate when new information arrives, and structured to support the full lifecycle — including the years after the product ships.