What Is a System Architecture?

Ask ten engineers what “architecture” means and you will get ten different answers. Some will sketch a block diagram. Some will describe a document. Some will point to a folder of PowerPoint slides. A few will say it is the high-level design, as opposed to the detailed design — though when pressed to define the boundary, they struggle.

The ambiguity is not harmless. When architecture is treated as a colloquial term for “the big picture,” architectural decisions get made informally, their rationale disappears, and the requirements they generate — the derived requirements that downstream teams must satisfy — go uncaptured. Programs run into integration problems that trace back to decisions nobody documented because nobody recognized them as decisions in the first place.

Systems engineering gives the term a precise definition. That precision is worth recovering.

The Formal Definition

The ISO/IEC/IEEE 42010 standard defines system architecture as “the fundamental concepts or properties of a system in its environment embodied in its elements, relationships, and in the principles of its design and evolution.” INCOSE’s Systems Engineering Handbook converges on a similar statement: architecture is the arrangement of system elements, their properties, and the relationships among them that collectively satisfy system requirements.

Unpack that sentence and three things stand out.

Elements. An architecture is always about things — functions, components, nodes, processes, humans, interfaces. Architecture without elements is philosophy.

Properties. Elements have attributes that matter: mass, power consumption, processing capacity, latency, reliability. The architecture must specify which properties are relevant, at what level of abstraction.

Relationships. The connection pattern is as important as the elements themselves. Two systems with identical components but different connection topologies are different architectures. Relationships carry allocation, dependency, interface, and behavioral constraints.
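The three ingredients above can be sketched as a minimal data model. This is an illustration of the definition, not any standard schema; every name here is invented:

```python
# A minimal sketch of the three ingredients of an architecture:
# elements, properties, and typed relationships.
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str                # e.g. a function, component, or interface
    kind: str                # "function", "component", "node", ...
    properties: tuple = ()   # (name, value, unit) triples that matter

@dataclass(frozen=True)
class Relationship:
    kind: str                # "allocated_to", "connects", "depends_on"
    source: str
    target: str

# Two systems with identical elements but different relationship sets
# are different architectures: the topology is part of the definition.
elements = [
    Element("filter", "function", (("latency", 5, "ms"),)),
    Element("fpga_1", "component", (("power", 3.2, "W"),)),
]
relationships = [Relationship("allocated_to", "filter", "fpga_1")]
```

Even at this toy scale, the relationship is typed: “allocated_to” carries different obligations than “connects” or “depends_on.”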

What the definition excludes is equally important. Architecture is not the same as design. Design fills in the how; architecture establishes the what and the why at the level of system structure. Architecture is also not a document. Documents may represent an architecture, but the architecture itself is the set of structural decisions, some of which may be captured in models, some in specifications, and — regrettably — some only in institutional memory.

The Four Major Views

A single diagram cannot represent a system architecture completely. Different stakeholders need to reason about different aspects of the system, and different engineering concerns require different representations. The standard approach is to express architecture through multiple views, each addressing a coherent set of concerns.

Functional View

The functional view answers: what does the system do, and how are those functions decomposed and connected?

Functions are the operations the system must perform, independent of how they are physically implemented. A functional decomposition breaks a top-level function — “process sensor data” — into subfunctions: acquire, filter, calibrate, fuse, output. Each subfunction has inputs, outputs, and performance requirements.
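The decomposition and its interface contracts can be written down explicitly. A sketch, using the subfunction names from the text and invented signal names:

```python
# Decomposing "process sensor data" into subfunctions, each with
# declared inputs and outputs. Signal names are illustrative.
decomposition = {
    "process sensor data": ["acquire", "filter", "calibrate", "fuse", "output"],
}

io = {
    "acquire":   {"in": ["raw analog signal"],   "out": ["raw samples"]},
    "filter":    {"in": ["raw samples"],         "out": ["filtered samples"]},
    "calibrate": {"in": ["filtered samples"],    "out": ["calibrated samples"]},
    "fuse":      {"in": ["calibrated samples"],  "out": ["fused estimate"]},
    "output":    {"in": ["fused estimate"],      "out": ["data product"]},
}

# A simple contract check: each subfunction's inputs must be produced
# by the subfunction upstream of it. These are the interface contracts
# that must be agreed before physical design proceeds.
chain = decomposition["process sensor data"]
for upstream, downstream in zip(chain, chain[1:]):
    assert io[upstream]["out"] == io[downstream]["in"], (upstream, downstream)
```

Note that nothing here says *where* any of these functions run. That is the point: the functional view is implementation-independent, which is what makes allocation a deliberate decision rather than an accident.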

The functional view matters because it drives allocation. You cannot decide which hardware component performs a function until you have clearly defined what the function is. It also exposes interface contracts: the outputs of one function become the inputs of another, and those contracts must be agreed upon before physical design proceeds.

A common failure mode is skipping the functional view and going directly to physical architecture. Teams end up with hardware components whose responsibilities are defined informally, interfaces that exist in one engineer’s head, and integration surprises that could have been caught at the functional level for a fraction of the cost.

Physical View

The physical view answers: what physical elements make up the system, how are they decomposed, and how are they interconnected?

Physical elements are the tangible or deployable things: circuit boards, enclosures, cables, software modules, nodes in a network, line replaceable units. The physical architecture defines hierarchy — which elements are contained within others — and interfaces — which elements exchange what signals, data, power, or mechanical loads through which ports.
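A physical view reduces to two structures: a containment hierarchy and a set of typed interfaces. A sketch with invented element and port names:

```python
# Containment hierarchy: which elements are contained within others.
containment = {
    "system": ["sensor_unit", "processing_unit"],
    "sensor_unit": ["sensor_board", "enclosure"],
    "processing_unit": ["cpu_board", "power_supply"],
}

# Interfaces: which elements exchange what, through which ports.
interfaces = [
    {"from": "sensor_board", "to": "cpu_board",
     "carries": "data", "port": "RS-422"},
    {"from": "power_supply", "to": "sensor_board",
     "carries": "power", "port": "28V bus"},
]

def parent_of(element):
    """Walk the containment hierarchy to find an element's parent."""
    for parent, children in containment.items():
        if element in children:
            return parent
    return None
```

Both structures are queryable: you can ask what an element is contained in, or which interfaces cross a subsystem boundary, which is exactly what document-based physical architectures make hard.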

The physical view is where most engineers feel at home. The risk is treating it as the only view, which makes functions implicit and allocation informal. When a requirement changes, teams that have only a physical architecture must reconstruct which functions are affected before they can assess impact.

Behavioral View

The functional and physical views are static. They describe structure, not time. The behavioral view answers: how does the system behave dynamically — what sequences of events occur, what states does the system move through, and how do elements interact over time within a single scenario?

Common behavioral representations include sequence diagrams (showing message exchanges between elements in time order), activity diagrams (showing control and data flow through a process), and state machines (showing how an element transitions between modes in response to events).
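A state machine is the easiest of these to sketch in code. The modes and events below are invented, but the shape is the general one: a transition table plus a driver that rejects events with no defined transition:

```python
# A minimal state machine: (state, event) -> next state.
transitions = {
    ("init", "self_test_pass"): "operational",
    ("operational", "sensor_failure"): "degraded",
    ("degraded", "sensor_recovered"): "operational",
    ("operational", "shutdown_cmd"): "off",
    ("degraded", "shutdown_cmd"): "off",
}

def run(events, state="init"):
    """Drive the machine through a sequence of events.
    Events with no defined transition are rejected, which is itself
    a behavioral requirement made explicit."""
    for event in events:
        if (state, event) not in transitions:
            raise ValueError(f"no transition for {event!r} in state {state!r}")
        state = transitions[(state, event)]
    return state

# One scenario: failure, recovery, then a commanded shutdown.
final = run(["self_test_pass", "sensor_failure", "sensor_recovered", "shutdown_cmd"])
```

Writing the table forces questions that prose requirements leave open: can the system be shut down while degraded? What happens to a sensor-failure event in the init state? The gaps in the table are the unanswered questions.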

Behavioral modeling is where many requirements on timing, sequencing, and error handling become visible. A requirement that says “the system shall detect sensor failure within 100 ms” is not fully allocated until you have traced the behavioral path from failure event through detection logic to output — and confirmed that the latencies along that path sum to less than the budget, with no single step absorbing it entirely.
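That budget check is simple arithmetic once the path is written down. A sketch, with invented worst-case step latencies:

```python
# Tracing the 100 ms detection requirement through a behavioral path.
# Step latencies are invented worst-case values for illustration.
budget_ms = 100
path = [
    ("failure event -> fault flag set", 2),
    ("fault flag -> detection logic cycle", 50),   # polling period
    ("detection logic -> health message", 10),
    ("health message -> output bus", 20),
]

total = sum(latency for _, latency in path)
margin = budget_ms - total
assert total <= budget_ms, f"path exceeds budget by {-margin} ms"
```

The check also shows where the margin lives: in this sketch the 50 ms polling period absorbs half the budget on its own, which is exactly the kind of fact that stays invisible until the path is traced step by step.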

Temporal View

The temporal view is sometimes treated as a subset of the behavioral view, but it deserves separate attention. It answers: when do functions execute, in what order, and with what timing constraints?

For embedded and real-time systems, temporal architecture is often as constraining as physical architecture. Scheduling decisions — which functions share a processor, what their periods and deadlines are, whether they are interrupt-driven or polled — have direct implications for whether timing requirements can be met. These decisions must be made explicitly and documented, because they generate derived requirements on processor utilization, memory bandwidth, and communication latency that cascade to component specifications.
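One classic screen for these scheduling decisions is the rate-monotonic utilization bound from Liu and Layland: n periodic tasks are schedulable under rate-monotonic priorities if total utilization stays below n(2^(1/n) − 1). A sketch with an invented task set:

```python
# Schedulability screen for a temporal architecture under rate-monotonic
# scheduling. Task set (execution time C, period T, both in ms) is
# invented for illustration.
tasks = [
    ("acquire", 2, 10),   # C = 2 ms, T = 10 ms
    ("filter",  5, 20),
    ("fuse",    8, 50),
]

n = len(tasks)
utilization = sum(c / t for _, c, t in tasks)   # 0.20 + 0.25 + 0.16 = 0.61
bound = n * (2 ** (1 / n) - 1)                  # ~0.78 for n = 3
schedulable = utilization <= bound
```

A derived requirement falls out of the calculation directly: total processor utilization for this allocation must stay under the bound, with margin for growth. That requirement exists only because of the decision to share one processor among these functions.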

Why Views Must Be Connected

Each view illuminates something the others obscure. But the views are not independent — they must be consistent with one another. A function defined in the functional view must be allocated to a physical element in the physical view. The behavioral view must be compatible with both the functional decomposition and the physical interconnect topology. Temporal constraints must be satisfiable within the physical and behavioral structure.
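The simplest cross-view consistency rule is the one named above: every function must be allocated to some physical element. When both views are data rather than documents, checking it is mechanical. A sketch with invented names:

```python
# Cross-view consistency check: every function in the functional view
# must be allocated to an element that exists in the physical view.
functions = {"acquire", "filter", "calibrate", "fuse", "output"}
physical_elements = {"sensor_board", "fpga_1", "cpu_board"}
allocations = {
    "acquire": "sensor_board",
    "filter": "fpga_1",
    "fuse": "cpu_board",
    "output": "cpu_board",
}

# Functions with no allocation at all:
unallocated = functions - allocations.keys()
# Allocations pointing at elements that don't exist in the physical view:
dangling = {f for f, e in allocations.items() if e not in physical_elements}
```

In this sketch `unallocated` comes back as `{"calibrate"}` — a gap that a manual document comparison would have to find by eye, but that a model can report every time anything changes.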

When views are maintained in separate tools — or separate files in the same tool — consistency becomes a manual discipline. Engineers check consistency by comparing documents. This works until the system is large enough, or the team is distributed enough, that the comparison becomes impractical. At that point, inconsistencies accumulate silently.

Architecture as a Requirements-Generating Activity

Here is the aspect of architecture that documentation-centric teams most consistently underestimate: every architectural decision is also a requirements-generating event.

Consider allocation. The decision to allocate a filtering function to a dedicated FPGA rather than a shared processor generates a set of derived requirements: the FPGA must meet specific logic density requirements, its interface to the data bus must meet bandwidth requirements, its power envelope must fit within the system power budget allocation. None of these requirements existed in the original system specification. They exist because of the architectural decision.
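Recorded as data, the decision and the requirements it generates look something like this. All IDs, numbers, and thresholds below are invented; only the shape matters:

```python
# The FPGA allocation decision from the text, recorded with the
# derived requirements it generates.
decision = {
    "id": "ADR-012",
    "statement": "Allocate the filtering function to a dedicated FPGA",
}

derived_requirements = [
    {"id": "DR-101", "text": "FPGA shall provide >= 80k logic cells",
     "source": "ADR-012", "allocated_to": "fpga_1"},
    {"id": "DR-102", "text": "FPGA data-bus interface shall sustain 200 Mb/s",
     "source": "ADR-012", "allocated_to": "fpga_1"},
    {"id": "DR-103", "text": "FPGA power draw shall not exceed 4 W",
     "source": "ADR-012", "allocated_to": "fpga_1"},
]

# Because each requirement records its source, the question "what did
# this decision generate?" has a mechanical answer.
from_adr_012 = [r["id"] for r in derived_requirements if r["source"] == "ADR-012"]
```

The `source` field is the whole argument of this section in miniature: without it, three requirements appear in the database with no visible reason for existing.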

Consider interface definition. Deciding that two subsystems will exchange data over an Ethernet link generates derived requirements on both subsystems: the transmitting subsystem must format data in a specified protocol, the receiving subsystem must be able to process incoming packets at the required rate, both must handle link failure within specified parameters.

Consider mode definition in the behavioral view. Deciding that the system has a degraded-operation mode generates derived requirements on every subsystem that must behave differently in that mode.

These derived requirements are not optional. They must be satisfied for the system to meet its top-level requirements. If they are not captured, they are not allocated, not verified, and not visible to the engineers who need to satisfy them.

The ISO/IEC 15288 lifecycle standard is explicit: derived requirements resulting from architectural decisions must be identified, documented, and allocated to system elements. The standard calls this requirements allocation and treats it as a formal systems engineering activity, not an informal side effect of design.

In practice, most programs capture derived requirements imperfectly. Architectural decisions happen in design reviews. Some derived requirements make it into a requirements management tool; many do not. The gap is not usually deliberate — it is structural. When the architecture lives in a slide deck and the requirements live in a separate database, connecting them is manual work that competes with other priorities.

How Modern Tools Implement Architectural Traceability

The core problem is representation. If architecture is a set of PowerPoint slides, it cannot be queried, cannot be automatically checked for consistency, and cannot be linked structurally to requirements. The slides represent the architecture, but they do not instantiate it in a way that supports engineering analysis.

Model-based systems engineering (MBSE) approaches address this by expressing architecture in formal models — SysML being the most widely used notation. SysML block definition diagrams capture the functional and physical hierarchy; internal block diagrams capture interconnect; state machines capture behavior; sequence diagrams capture dynamics. When the model is the authoritative source, queries become possible: which functions are allocated to this component? Which requirements trace to this interface?

The limitation of traditional MBSE tools is that the model and the requirements often still live in separate systems, linked by identifiers that must be maintained manually. Traceability reports are generated periodically rather than continuously, and they show what links exist at the time of generation — not a live view of what has changed since.

Flow Engineering takes a different structural approach. It treats the entire systems engineering artifact space — requirements, functions, components, interfaces, tests, decisions — as nodes and edges in a persistent graph. Architecture is not a separate document or a separate model file; it is a structured layer within that same graph. Functions are nodes. Physical elements are nodes. Allocation relationships are typed edges connecting them. Interface definitions are edges with properties.

This means that when an architectural decision creates a derived requirement, the derived requirement is not written in a separate document and manually linked — it is created as a node in the graph with typed relationships to the architectural decision that generated it, the system-level requirement it derives from, and the subsystem element it is allocated to. The traceability is structural, not documentary.

The practical consequence is bidirectional impact analysis. If a system-level requirement changes, Flow Engineering can surface every derived requirement downstream of it through the graph, including those generated by specific architectural decisions. If a component allocation changes, the tool can surface the derived requirements that may need to be re-examined. This is not a report generated from a separate query — it is a live property of the graph.
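The traversal behind that kind of impact analysis is a plain graph walk. The sketch below illustrates the idea only — the node names and graph shape are invented and say nothing about Flow Engineering's actual data model:

```python
# Downstream impact analysis over a requirements/architecture graph.
from collections import deque

# Directed edges: node -> nodes derived from or allocated under it.
downstream = {
    "SYS-REQ-1": ["ADR-012"],
    "ADR-012": ["DR-101", "DR-102", "DR-103"],
    "DR-102": ["TEST-77"],
}

def impact(node):
    """Breadth-first walk returning everything reachable downstream of node."""
    seen, queue = set(), deque([node])
    while queue:
        for nxt in downstream.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Changing SYS-REQ-1 surfaces the architectural decision, its three
# derived requirements, and the test that verifies one of them.
affected = impact("SYS-REQ-1")
```

The value is not the algorithm, which is elementary, but the precondition: the walk only works if the edges exist as structure rather than as prose scattered across documents.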

Flow Engineering’s deliberate scope is worth naming directly: it is built for systems and hardware engineering teams who want to move past document-based practices, and its depth is in the requirements-to-architecture-to-verification chain. Teams that need heavyweight process compliance tooling built around legacy document workflows — DO-178C tool qualification chains, for example, or AS9100-mandated document control — will find it is not designed to replace those systems wholesale. It is designed to be the engineering layer that makes the decisions visible and traceable while those processes run alongside.

Practical Starting Points

If your program does not have a formally maintained system architecture, the goal is not to immediately implement a full MBSE model. The goal is to make the decisions visible.

Start by identifying the major allocation decisions your program has already made. For each one, ask: what requirements did this decision generate? Are those requirements written down? Are they allocated to a responsible team?

Then address view coverage. Most programs have some form of physical architecture. The gaps are usually functional view (functions are implicit in the physical structure rather than explicitly defined) and behavioral view (timing and sequencing requirements are stated but not traced through a behavioral model).

For new programs, treat the functional view as the first architecture artifact, not the last. Define what the system must do before deciding how it will be structured physically. This sequencing is obvious in principle and violated constantly in practice, usually because hardware schedules demand physical architecture decisions before system-level functional analysis is complete. Name the tension explicitly and manage it as a program risk.

Finally, insist that derived requirements be captured with explicit linkage to the architectural decision that created them. The link does not need to be automated to be valuable. Even a column in a requirements spreadsheet that says “source: architectural decision ADR-047” is better than derived requirements that appear to have originated from nowhere.

What Architecture Is Actually For

A system architecture is not a deliverable. It is not a document type, a diagram, or a phase gate artifact. It is the authoritative description of how a system is structured to satisfy its requirements — the arrangement of elements, properties, and relationships that makes the requirements achievable.

When that description is maintained as a live, traceable artifact rather than a snapshot in a presentation file, it becomes a tool for engineering analysis: impact assessment, interface verification, requirement allocation confirmation, behavioral consistency checking. It earns its place in the program not because it is required by a standard, but because programs that maintain it rigorously make better decisions faster and find integration problems earlier.

That is the practical argument for taking the formal definition seriously.