Requirements decomposition is the process of transforming high-level mission objectives — what the system needs to do — into the detailed, measurable, implementable specifications that hardware and software engineers can build and verify against.
It sounds straightforward. In practice, it’s one of the hardest parts of systems engineering, and poor decomposition is the root cause of a surprisingly large fraction of late-stage integration failures.
The Decomposition Hierarchy
Most systems engineering frameworks use three to five levels of requirements hierarchy, though the specific names vary by domain:
Mission/Stakeholder Requirements — What the customer or operator needs the system to do. Often qualitative or operational: “The system shall allow an operator to track 200 simultaneous targets within the operational coverage area.” These are the top of the hierarchy and the ultimate measure of system success.
System Requirements — The technical specification of what the system must do to satisfy mission requirements. More precise, more measurable, but still at the whole-system level: “The system shall track a minimum of 200 targets simultaneously with a position update rate of no less than 1 Hz and a position accuracy of ≤10m RMS within the specified coverage volume.”
Subsystem/Functional Requirements — Requirements allocated to specific subsystems, derived by distributing system-level performance and functionality. The radar subsystem gets a detection requirement. The processing subsystem gets a tracking algorithm requirement. The communication subsystem gets a latency requirement. Each must be satisfied for the system requirement to be met.
Component/Derived Requirements — Requirements for specific hardware and software components, derived from subsystem requirements. Specific enough to be the direct basis for design and verification without additional interpretation.
Interface Requirements — Requirements that define the boundaries between components and subsystems. Often treated as a parallel hierarchy rather than a level in the main hierarchy, but traceability to both sides of the interface is essential.
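The hierarchy above can be sketched as a simple tree of requirement records. The schema below is illustrative (field names, IDs, and level labels are my own, not from any particular requirements tool), using the tracking example from the text:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One node in the decomposition hierarchy (illustrative schema)."""
    req_id: str
    level: str          # "mission", "system", "subsystem", "component"
    text: str
    children: list["Requirement"] = field(default_factory=list)

    def leaves(self):
        """Yield bottom-of-hierarchy requirements, which must carry verification methods."""
        if not self.children:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

# A fragment of the tracking example from the text:
mission = Requirement("MSN-1", "mission",
    "Allow an operator to track 200 simultaneous targets in the coverage area")
system = Requirement("SYS-1", "system",
    "Track >=200 targets at >=1 Hz update rate with <=10 m RMS position accuracy")
radar = Requirement("SUB-RDR-1", "subsystem",
    "Detect targets within the specified coverage volume")
mission.children.append(system)
system.children.append(radar)

print([r.req_id for r in mission.leaves()])  # → ['SUB-RDR-1']
```

Traversing to the leaves like this is the basic operation behind the completeness and verification checks discussed later in the section.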
What Good Decomposition Looks Like
Each level of the hierarchy should add something beyond restating the parent requirement at smaller scale. Specifically, decomposition should:
Allocate performance budgets. If the system requirement specifies end-to-end latency of 50ms, the decomposition needs to allocate that budget across subsystems — perhaps 15ms for sensor processing, 20ms for data transport, 15ms for display rendering. This allocation is a design decision, not just a specification activity.
Increase measurability. Mission requirements are often operational statements that aren’t directly verifiable (“the system shall be easy to operate”). Subsystem requirements should be specific enough to be directly verifiable against a test method.
Specify interfaces. As you decompose, the interfaces between components become explicit requirements. These are often where integration failures originate — requirements that were clear within a subsystem but left implicit at the boundary between subsystems.
Capture derived requirements. Some subsystem requirements arise from design decisions rather than direct derivation from system requirements — they’re “derived requirements” that result from how you chose to implement the system. These need to be in the model with explicit derivation rationale, not left implicit.
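The latency allocation described above is easy to sanity-check mechanically: for an additive quantity like latency, the subsystem budgets must sum to no more than the system budget. A minimal sketch, using the 50 ms example from the text (the subsystem names are illustrative):

```python
# Additive budget check: subsystem latency allocations must fit within
# the end-to-end system requirement. Values are the example from the text.
SYSTEM_LATENCY_MS = 50.0

allocation_ms = {
    "sensor_processing": 15.0,
    "data_transport": 20.0,
    "display_rendering": 15.0,
}

total = sum(allocation_ms.values())
margin = SYSTEM_LATENCY_MS - total
assert total <= SYSTEM_LATENCY_MS, (
    f"allocated {total} ms exceeds the {SYSTEM_LATENCY_MS} ms system budget"
)
print(f"allocated {total} ms, margin {margin} ms")  # → allocated 50.0 ms, margin 0.0 ms
```

Note that this example allocation consumes the entire budget with zero margin; in practice teams hold some budget in reserve at the system level precisely because of the integration risks discussed below.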
The Hard Part: Performance Budget Allocation
The most technically demanding part of requirements decomposition is allocating system-level performance requirements to subsystem budgets when the system behavior emerges from the interaction of subsystems.
A 10m end-to-end position accuracy requirement doesn’t simply divide by the number of subsystems in the signal chain. Because each subsystem usually contributes error independently, the allocation is an error-budgeting exercise: the subsystem error budgets combine in root-sum-square fashion, and it is that combined value that must satisfy the system requirement.
A 50ms end-to-end latency requirement does divide across subsystems, but not necessarily uniformly. The allocation reflects the relative design freedom available in each subsystem, the state of the art in component performance, and engineering judgment about where budget is cheapest to spend.
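The difference between the two allocation models is worth seeing numerically. A minimal sketch of root-sum-square error budgeting, using the 10 m accuracy requirement from the text (the subsystem names and budget values are illustrative, not from a real system):

```python
import math

# Root-sum-square (RSS) error budgeting: statistically independent
# subsystem errors combine as sqrt(sum of squares), not linearly.
SYSTEM_ACCURACY_M = 10.0  # end-to-end position accuracy requirement (RMS)

error_budget_m = {          # illustrative allocation
    "sensor_measurement": 6.0,
    "time_synchronization": 4.0,
    "coordinate_transform": 3.0,
    "track_filter": 5.0,
}

rss = math.sqrt(sum(e**2 for e in error_budget_m.values()))
linear = sum(error_budget_m.values())

print(f"RSS combined error: {rss:.2f} m")       # → RSS combined error: 9.27 m
print(f"Linear sum: {linear} m")                 # → Linear sum: 18.0 m
assert rss <= SYSTEM_ACCURACY_M
```

Individually these budgets sum to 18 m, but because the errors are independent the RSS combination is about 9.27 m, inside the 10 m requirement. This is why budgets allocated under the wrong combination model can pass at subsystem level and still fail at integration.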
These allocation decisions are engineering decisions with real consequences. Over-constrain a subsystem and you make it impossible to build. Under-constrain it and you move the problem to integration, where it’s much more expensive to fix.
Poor allocation is the source of a specific pattern of late-stage integration failure: every subsystem was built to spec, every subsystem passed its unit-level verification, but the integrated system fails to meet system requirements because the requirements were allocated in a way that didn’t account for how errors would combine.
Completeness and Quality
The other dimension of decomposition quality is completeness — making sure the full set of system requirements is decomposed, not just the ones that are easy to decompose.
Common incompleteness patterns:
Safety and regulatory requirements not decomposed. System-level safety requirements need to flow down to the subsystem and component level. The system-level “shall not emit RF in the prohibited frequency bands” needs to decompose to emission requirements for every RF-emitting component, not just appear at the top.
Non-functional requirements orphaned. Reliability, maintainability, environmental, and electromagnetic compatibility requirements are frequently not decomposed. They appear at the system level and then either disappear or are restated verbatim at every level — neither of which produces implementable component specifications.
Interface requirements implied but not stated. The most common source of integration failure is interface requirements that were obvious to the people who wrote the system requirements but never explicitly stated at the interface between subsystems.
Derived requirements undocumented. Design decisions that create new requirements on subsystems need to be documented and connected to the decisions that created them. Undocumented derived requirements become invisible constraints that surprise engineers who weren’t in the room when the decision was made.
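Several of these incompleteness patterns reduce to the same structural check: a non-leaf requirement with no children has not been decomposed. A minimal orphan-detection sketch over a flat requirements table (the data model, IDs, and level labels are illustrative):

```python
# Flag "orphaned" requirements: non-leaf requirements that have no child
# requirements allocated to them. Illustrative data model, not a real tool's.
requirements = {
    "SYS-1": {"level": "system",    "children": ["SUB-1", "SUB-2"]},
    "SYS-2": {"level": "system",    "children": []},   # e.g. a safety req never decomposed
    "SUB-1": {"level": "subsystem", "children": ["CMP-1"]},
    "SUB-2": {"level": "subsystem", "children": []},   # e.g. an EMC req that stops here
    "CMP-1": {"level": "component", "children": []},
}

LEAF_LEVEL = "component"  # only component-level requirements may be childless

orphans = [rid for rid, r in requirements.items()
           if r["level"] != LEAF_LEVEL and not r["children"]]
print(orphans)  # → ['SYS-2', 'SUB-2']
```

A check like this catches requirements that disappear from the hierarchy; it cannot, by itself, catch the verbatim-restatement pattern, which needs a comparison of requirement text across levels.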
AI-Assisted Decomposition
The blank-page problem in requirements decomposition — staring at a high-level requirement and figuring out what subsystem requirements it implies — is one of the areas where AI assistance provides genuine value.
AI-assisted decomposition tools can:
Generate first-pass decompositions. Given a system-level requirement and system context, an AI model can propose candidate subsystem requirements covering the main performance dimensions, interface considerations, and derived requirements that commonly arise for that type of system. This isn’t a replacement for engineering judgment but substantially reduces the effort of going from nothing to a reviewable draft.
Flag quality issues. Requirements that lack measurability, contain ambiguous verbs (“sufficient,” “appropriate,” “user-friendly”), mix multiple requirements in a single statement, or use undefined terms can be flagged automatically. Catching these during authoring is far cheaper than finding them in review.
Check completeness. Given a decomposition, AI tools can compare the coverage of parent requirement dimensions against child requirements and surface areas that appear underspecified.
Maintain consistency. As requirements change, AI tools can surface child requirements that may need updating based on changes to parent requirements — a form of impact analysis applied to the decomposition hierarchy specifically.
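The quality-flagging capability in particular has a simple rule-based core that predates AI tooling. A crude sketch of the ambiguous-wording and compound-requirement checks described above (the word list and heuristics are illustrative; real tools use much richer language analysis):

```python
import re

# Flag unverifiable wording and compound statements in requirement text.
# The term list is illustrative, drawn from the examples in the text.
AMBIGUOUS = re.compile(
    r"\b(sufficient|appropriate|adequate|user-friendly|easy|as required)\b",
    re.IGNORECASE,
)

def quality_flags(text: str) -> list[str]:
    """Return a list of quality issues found in one requirement statement."""
    flags = [f"ambiguous term: {m.group(0)!r}" for m in AMBIGUOUS.finditer(text)]
    if "shall" in text and " and " in text:
        flags.append("possible compound requirement (contains 'and')")
    return flags

for flag in quality_flags(
    "The system shall be easy to operate and provide appropriate feedback."
):
    print(flag)
```

Running this on the example flags two ambiguous terms and a possible compound requirement, which is exactly the kind of issue that is cheap to catch at authoring time and expensive to catch in review.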
The value of these capabilities scales with system complexity. For systems with hundreds or thousands of requirements in deep hierarchies, maintaining decomposition quality manually is genuinely hard. AI assistance at each step compounds into significant time savings and fewer quality issues making it to late-stage review.
The Connection to Verification
Every requirement at the bottom of the decomposition hierarchy needs a verification method — analysis, test, inspection, or demonstration. Requirements that can’t be verified are design fiction, not specifications.
Working through the verification method for each component requirement is a useful quality check on the decomposition itself. If you can’t figure out how to verify a requirement, it’s often because the requirement is too vague, improperly allocated (the component can’t produce the observable behavior the requirement calls for), or implicitly dependent on a requirement that hasn’t been stated.
Good decomposition and good verification planning are interleaved activities. Teams that treat them as sequential — finish decomposition, then figure out verification — consistently find that the decomposition is incomplete in ways that verification planning would have caught earlier.
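The verification check at the bottom of the hierarchy is also mechanical in its simplest form: every leaf requirement must carry one of the four standard verification methods. A minimal sketch (the requirement records are illustrative):

```python
# Every bottom-level requirement needs one of the four standard
# verification methods; anything else is "design fiction" per the text.
VALID_METHODS = {"analysis", "test", "inspection", "demonstration"}

leaf_requirements = [                       # illustrative records
    {"id": "CMP-1", "verify": "test"},
    {"id": "CMP-2", "verify": "analysis"},
    {"id": "CMP-3", "verify": None},        # unassigned: not yet a real specification
]

unverifiable = [r["id"] for r in leaf_requirements
                if r["verify"] not in VALID_METHODS]
print(unverifiable)  # → ['CMP-3']
```

The mechanical check only confirms that a method is assigned; the harder, human part is the one the text describes, deciding whether the assigned method can actually observe the behavior the requirement calls for.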