The Question Behind the Question

When engineers ask how to decompose requirements, they usually mean one of two things: either they have a pile of stakeholder statements and don’t know how to structure them, or they have a structured model that looks correct on paper but keeps producing surprises during integration. Both problems have the same root cause. Decomposition is treated as a documentation exercise rather than as an engineering reasoning process.

This article works through a realistic example — a defense unmanned aerial system (UAS) with an intelligence, surveillance, and reconnaissance (ISR) mission — and shows what the decomposition actually looks like at each level: what you’re deciding, what artifacts you’re producing, and what the failure modes look like when the process breaks down.


The Starting Point: A Stakeholder Requirement

Stakeholder requirements live at the top of the decomposition hierarchy. They describe what the customer or operator needs, in operational terms, without specifying mechanism. They are almost never verifiable as written — that’s not a bug, it’s by design. Verification is something you attach to requirements at lower levels.

Here is a realistic Level 0 (stakeholder) requirement for an ISR UAS:

STK-007: The system shall provide continuous full-motion video of a designated ground target for a minimum of 45 minutes without interruption, at a ground sample distance (GSD) of 0.3 meters or better, under winds up to 35 knots and ambient temperatures between −20°C and +50°C.

This is a well-formed stakeholder requirement. It states an operational capability, includes measurable performance bounds, specifies environmental conditions, and says nothing about how the capability is achieved. Every word is doing work.

The job of decomposition is to systematically answer: what must the system do, and do well enough, and interact with correctly, for STK-007 to be achievable?


Level 1: Functional Decomposition

The first decomposition move is to identify the functions the system must perform to satisfy the stakeholder requirement. Functions are behaviors, not components. You are not yet saying what hardware performs the function.

For STK-007, the top-level functions are:

| ID | Functional Requirement |
| --- | --- |
| SYS-F-001 | The system shall maintain line-of-sight orientation of the imaging payload toward the designated target throughout the observation window. |
| SYS-F-002 | The system shall acquire and continuously stream full-motion video from the imaging payload to the ground station. |
| SYS-F-003 | The system shall sustain airborne flight for a minimum of 60 minutes under the specified environmental envelope. (60 min = 45 min observation + margin for transit and re-acquisition.) |
| SYS-F-004 | The system shall accept and execute target designation updates from the ground control station within 5 seconds of receipt. |

Notice what’s happening:

  • SYS-F-003 adds derived content — the 60-minute endurance number doesn’t appear in STK-007 but is derived from it when you account for transit time and operational margin. That derivation must be documented and traceable (a minimal version is sketched after this list).
  • SYS-F-004 captures a behavioral requirement (responsiveness to updates) that a naive reading of STK-007 might miss entirely. “Continuous” coverage implies the system can re-establish track after a designation update or a track loss, which in turn means command latency must be bounded.
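
To make the SYS-F-003 derivation concrete, here is a minimal version. The transit legs and re-acquisition margin are illustrative assumptions, not values from STK-007; what matters is that the arithmetic, whatever its inputs, is recorded in the trace:

$$t_{\text{endurance}} = t_{\text{observe}} + 2\,t_{\text{transit}} + t_{\text{margin}} = 45 + 2(5) + 5 = 60\ \text{min}$$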

This is where decomposition begins to reveal requirements that were implicit in the stakeholder statement.


Level 2: Performance Allocation

Functional requirements tell you what. Performance requirements tell you how well. The decomposition from Level 1 to Level 2 allocates a performance budget across subsystems. This is where physics and systems engineering intersect.

Take SYS-F-001 (payload pointing). To achieve 0.3 m GSD at operational altitude, the imaging system needs to know where it’s pointing with high angular precision. That budget gets allocated:

| ID | Allocated Performance Requirement | Allocated To |
| --- | --- | --- |
| EO-P-001 | The EO/IR payload shall maintain pointing stability of ±0.05° or better during active imaging. | Payload subsystem |
| GNC-P-001 | The GNC subsystem shall provide inertial attitude estimates with accuracy of ±0.03° (1σ) at 100 Hz update rate. | GNC subsystem |
| GIM-P-001 | The gimbal subsystem shall provide angular stabilization with residual error not exceeding ±0.02° under platform angular rates up to 15°/s. | Gimbal subsystem |

The three allocations are sized to close against the system-level pointing budget of ±0.1°. The allocation decision should be documented as a trade — why these numbers, not others? That rationale is part of the requirements model, not a footnote.
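
To see what the combination rule does to those numbers: a worst-case linear sum exactly consumes the ±0.1° budget, while a root-sum-square (RSS) combination, appropriate when the three error sources are statistically independent, leaves margin:

$$\epsilon_{\text{worst-case}} = 0.05 + 0.03 + 0.02 = 0.10^{\circ}$$

$$\epsilon_{\text{RSS}} = \sqrt{0.05^2 + 0.03^2 + 0.02^2} \approx 0.062^{\circ}$$

Which rule applies is itself an engineering decision, and it belongs in the documented trade alongside the numbers.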

What makes this good: Each child requirement is verifiable independently. EO-P-001 can be tested on a bench with a target and angular measurement fixture. GNC-P-001 can be validated against truth data. GIM-P-001 can be measured on a vibration table. None of them requires the full system to be assembled to verify.


Level 3: Interface Requirements

Interface requirements describe the interactions between subsystems. They are the most structurally neglected class of requirement on hardware programs, and their neglect is responsible for a disproportionate share of integration failures.

For the payload-gimbal-GNC chain, the interface requirements look like this:

| ID | Interface Requirement | Interface |
| --- | --- | --- |
| IF-001 | The GNC subsystem shall output inertial attitude quaternions to the gimbal controller on the vehicle LAN at 100 Hz with latency not exceeding 5 ms. | GNC → Gimbal |
| IF-002 | The gimbal subsystem shall accept pointing commands from the mission processor in the form of NED-referenced azimuth/elevation angles, encoded per ICD-GIM-001. | Mission Processor → Gimbal |
| IF-003 | The payload shall provide a frame-synchronized trigger signal to the gimbal controller at the start of each imaging frame, with timing accuracy of ±1 ms. | Payload → Gimbal |
| IF-004 | The EO/IR payload shall output H.264 encoded video over Ethernet at a sustained rate of 30 fps at 1080p resolution to the mission processor. | Payload → Mission Processor |
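
It can help to see what a requirement like IF-001 implies at the message level. The sketch below is a hypothetical field layout, not the contents of ICD-GIM-001; the field names and conventions are assumptions for illustration:

```python
from dataclasses import dataclass

# Hypothetical layout for the IF-001 attitude message (GNC -> gimbal).
# Field names, types, and the scalar-first quaternion convention are
# illustrative assumptions, not taken from ICD-GIM-001.
@dataclass
class AttitudeMessage:
    timestamp_us: int   # GNC time of validity in microseconds; makes the
                        # 5 ms latency requirement measurable end to end
    q_w: float          # inertial attitude quaternion, scalar first (assumed)
    q_x: float
    q_y: float
    q_z: float
    sequence: int       # monotonic counter; makes drops at 100 Hz detectable
```

Note that the two bookkeeping fields exist to serve requirements: the timestamp is what makes the latency bound in IF-001 verifiable, and the sequence counter is what makes the 100 Hz rate auditable.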

Each interface requirement must be owned. That means it appears in the requirements baseline of one subsystem (typically the receiver, not the sender, though programs differ) and is traceable to the functional and performance requirements it enables.

On programs where interface requirements are only captured in ICDs and not in the requirements model, they become invisible to traceability analysis. They don’t appear in coverage reports. They don’t get formally verified. They get “checked” informally during integration, which is not verification.


Level 4: Verification Requirements

Every performance and interface requirement needs a verification requirement that specifies method, acceptance criterion, and level of assembly. This is where the requirements model closes back on itself.

| ID | Verification Requirement | Verifies | Method |
| --- | --- | --- | --- |
| VER-001 | Pointing stability shall be verified by commanding the system to track a stationary target at operational altitude equivalent and measuring image displacement across 500 consecutive frames. Pass criterion: RMS displacement ≤ 0.05°. | EO-P-001 | Test |
| VER-002 | GNC attitude accuracy shall be verified by post-processing flight log data against a reference IMU truth source. Pass criterion: ±0.03° (1σ) across 10 representative flight segments. | GNC-P-001 | Test + Analysis |
| VER-003 | GNC-to-gimbal message latency shall be verified by injecting timestamped test messages and measuring end-to-end delivery time across 10,000 samples. Pass criterion: 99th percentile latency ≤ 5 ms. | IF-001 | Test |
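
The data-reduction step behind VER-003 is simple enough to sketch. Assuming the injected test messages carry send timestamps and the receiver logs arrival times on a common clock (both in microseconds), the pass/fail check reduces to a percentile computation. This is an illustration, not a program test artifact:

```python
import math

def p99_latency_ms(send_us: list[int], recv_us: list[int]) -> float:
    """Nearest-rank 99th-percentile one-way latency, in milliseconds."""
    latencies = sorted((r - s) / 1000.0 for s, r in zip(send_us, recv_us))
    rank = math.ceil(0.99 * len(latencies))  # nearest-rank percentile method
    return latencies[rank - 1]

# VER-003 pass criterion, applied to the 10,000 timestamped samples:
#   p99_latency_ms(send_times, recv_times) <= 5.0
```

Writing even this much forces the open questions to surface early: whose clock timestamps the messages, and how much synchronization error it contributes to the 5 ms budget.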

Verification requirements are often treated as downstream tasks — something you figure out after the design is done. That’s a mistake. Writing the verification requirement at the same time as the performance requirement forces precision. If you can’t write a concrete test for a requirement, the requirement isn’t well-formed.


What Makes Decomposition Break Down

The worked example above looks clean. Real programs don’t look like this. Here are the failure modes that explain the gap.

Jumping to Design

The most common decomposition failure is writing mechanism where you should be writing behavior. Instead of SYS-F-002 (“the system shall acquire and stream full-motion video”), you see: “The system shall use an H.264 codec with a minimum bitrate of 8 Mbps delivered over MIL-STD-1553B.”

The problem: MIL-STD-1553B has a 1 Mbps bandwidth ceiling. An 8 Mbps video stream cannot physically travel over a 1553 bus. Someone wrote a design decision as a requirement before doing the analysis. Now those two requirements are in conflict and nobody owns the resolution because they both look like “shall” statements.
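
The feasibility check that was skipped fits in one line:

$$\frac{R_{\text{required}}}{R_{\text{bus}}} = \frac{8\ \text{Mbps}}{1\ \text{Mbps}} = 8$$

The required throughput exceeds the bus ceiling eightfold before protocol overhead is even counted. That analysis belongs before the “shall,” not after integration.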

Design decisions belong in the design artifact, linked to the requirement they satisfy. Requirements describe the need. Conflating them makes both the requirement and the design harder to change.

Missing Failure Modes as Requirements

STK-007 says “continuous” video coverage. What does the system do when the RF link to the ground station drops? When the gimbal hits a mechanical stop? When the target moves outside the imaging field of view?

If the required failure behavior is not specified, there is no requirement to design to and no criterion to verify against. The failure behavior that ends up in the product is whatever the engineer decided on the day they wrote the error handler. This is the origin of integration surprises: the failure mode was real; it just wasn’t specified.

Failure modes belong in the requirements model as explicit requirements — typically derived from a preliminary hazard analysis or FMEA run early in the program. “The system shall transition to a safe hold mode and alert the operator within 3 seconds upon detection of RF link loss” is a real requirement. It has a failure condition, a response behavior, a timing bound, and a trigger — all verifiable.
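
A requirement with that structure maps directly onto an implementation skeleton. The sketch below is illustrative, with assumed names and an assumed detection threshold; the point is that the trigger, response, alert, and timing bound each have an obvious home in the code:

```python
import enum
import time

class Mode(enum.Enum):
    MISSION = enum.auto()
    SAFE_HOLD = enum.auto()

# Assumed detection window. The requirement bounds the whole
# detect-transition-alert chain at 3 seconds, so the detection
# threshold must be budgeted well inside that.
LINK_TIMEOUT_S = 1.0

class LinkLossHandler:
    def __init__(self, alert_operator):
        self.alert_operator = alert_operator    # operator-alert callback
        self.mode = Mode.MISSION
        self.last_rx = time.monotonic()

    def on_downlink_heartbeat(self):
        self.last_rx = time.monotonic()         # link is alive

    def poll(self):
        silent = time.monotonic() - self.last_rx
        if self.mode is Mode.MISSION and silent > LINK_TIMEOUT_S:
            self.mode = Mode.SAFE_HOLD          # required response behavior
            self.alert_operator("RF link loss") # required operator alert
```

If the requirement had never been written, nothing in this skeleton would exist until an engineer improvised it under integration pressure.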

Interface Requirements Living Nowhere

On programs that use separate tools for requirements and for interface control, interface requirements fall into a gap between the two. They’re “in the ICD,” which is a document, not a model artifact. They don’t appear in the requirements database. They don’t show up in traceability reports. They don’t get a verification status.

The consequence is not that they go unimplemented — engineers usually read the ICD. The consequence is that when the ICD changes late in the program, nobody knows which requirements are affected. There’s no automated impact analysis. The change is propagated manually, incompletely, under schedule pressure.

Incomplete Allocation

When a top-level performance requirement is decomposed into subsystem allocations, the allocations must sum correctly. If the pointing budget at the system level is ±0.1°, the contributions from payload, gimbal, and GNC must account for the full budget under the specified combination rule (RSS, worst-case sum, etc.).

Programs routinely have requirements that are allocated to one subsystem but not to the others that contribute to the same performance parameter. The result is a performance gap that appears at integration and can’t be resolved without reopening requirements — at the worst possible time.
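
This check can be made mechanical once the budget, the contributors, and the combination rule are recorded together. A minimal sketch, using the pointing budget from the worked example (subsystem names are shorthand, not program identifiers):

```python
import math

def allocation_gap(budget: float, allocations: dict[str, float],
                   rule: str = "rss") -> float:
    """Margin left after combining allocations; negative means overrun."""
    if rule == "rss":
        combined = math.sqrt(sum(a * a for a in allocations.values()))
    elif rule == "worst_case":
        combined = sum(allocations.values())
    else:
        raise ValueError(f"unknown combination rule: {rule}")
    return budget - combined

# Level 2 pointing budget from the worked example, in degrees:
gap = allocation_gap(0.10, {"payload": 0.05, "gnc": 0.03, "gimbal": 0.02},
                     rule="worst_case")
assert gap >= 0.0  # a negative gap is the integration surprise described above
```

The hard part is not the arithmetic; it is keeping the list of contributors complete, which is exactly where tooling can help.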


How Modern Tools Support Structured Decomposition

The failure modes above are not primarily caused by engineers making bad decisions. They’re caused by the friction of maintaining a large, multi-level requirements model in tools that were designed around document metaphors. When your requirements live in a flat database with manual link fields, maintaining traceability across four levels of decomposition, across subsystem boundaries, and across interface documents is a labor-intensive activity that competes with design work and loses.

This is the problem that graph-based, AI-native tools are built to address. Flow Engineering is built specifically for hardware and systems engineering programs and takes a model-first approach: requirements exist as nodes in a connected graph, with relationships — decomposition, satisfaction, verification, interface allocation — as typed edges.

In a graph model, the completeness properties that matter become computable. Flow Engineering’s AI layer can flag a top-level functional requirement that has no child performance requirements. It can identify a performance parameter that is allocated to one subsystem but not to all the subsystems that physically contribute to it. It can surface interface edges that are unowned — present as a connection between nodes but not assigned to a requirements owner in either subsystem’s baseline.
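
To illustrate why these checks become computable, here is a toy version of the idea, emphatically not Flow Engineering’s data model or API: requirements and interfaces as nodes, relationships as typed edges, and gap-finding as simple queries over the triples:

```python
# Toy typed-edge requirements graph; IDs reuse the worked example.
edges = [
    ("SYS-F-001", "decomposes_to", "EO-P-001"),
    ("SYS-F-001", "decomposes_to", "GNC-P-001"),
    ("SYS-F-001", "decomposes_to", "GIM-P-001"),
    ("VER-001",   "verifies",      "EO-P-001"),
    ("IF-001",    "connects",      "GNC->Gimbal"),  # no "owned_by" edge yet
]

def targets(node: str, edge_type: str) -> list[str]:
    return [t for s, e, t in edges if s == node and e == edge_type]

# Gap query 1: functional requirements with no child performance requirements.
unrefined = [f for f in ("SYS-F-001", "SYS-F-002", "SYS-F-004")
             if not targets(f, "decomposes_to")]   # -> SYS-F-002, SYS-F-004

# Gap query 2: interface requirements present in the graph but unowned.
unowned = [s for s, e, _ in edges if e == "connects"
           and not targets(s, "owned_by")]         # -> IF-001
```

In a document-centric tool, each of these queries is a manual review pass; in a graph, they can run continuously.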

What this means in practice: the gap-finding work that currently happens at PDR, when engineers walk through thick requirements documents looking for holes, becomes a continuous background activity. An engineer working on GNC performance allocation gets a flag that the pointing budget at the system level hasn’t been updated to reflect the GNC allocation they just entered. The flag appears in the tool, not six weeks later in a review.

Flow Engineering is deliberately focused on hardware programs that operate in regulated or defense environments — the tool is not a general-purpose requirements manager. Teams that need tight JAMA-style process auditability for medical device submissions, for example, will find the workflow different. But for defense systems programs running MBSE-adjacent processes and trying to close the gap between requirements and design models, the structured decomposition support and the AI-assisted gap analysis are directly targeted at the failure modes described in this article.


Practical Starting Points

If you’re working on a hardware program right now and the decomposition is in trouble, the most valuable immediate actions are:

1. Assign ownership to every interface requirement. Audit your current baseline. Every interface “shall” that lives only in an ICD or in a systems architecture diagram needs to be instantiated as a named requirement in one subsystem’s baseline, with a bidirectional trace.

2. Write the verification method before closing any performance requirement. If the team can’t articulate a test or analysis method for a requirement at the time they write it, it’s a signal the requirement isn’t yet well-formed — not a reason to defer verification planning.

3. Separate failure modes into explicit requirements during early hazard analysis. Don’t wait for the FMEA to be complete. As soon as a failure mode is identified that requires a specific system response, write a requirement. The verification planning can follow.

4. Check allocation completeness explicitly at each decomposition level. Before calling a requirements review done, verify that every top-level performance parameter has a complete allocation across all contributing subsystems, with a documented combination rule.

Good decomposition doesn’t require perfect tooling. It requires treating requirements as an engineering reasoning artifact — something you use to make and record decisions — rather than as documentation you produce to satisfy a process gate. The tools that make that easier are worth knowing about. The discipline is the thing that makes them work.