Should Your Systems Engineer Report to Engineering or Program Management?
A VP of Engineering at a 150-person hardware company recently posed a question that comes up more often than it should: “We’re building out our systems engineering function. Should it report through my organization, or through the program management office?”
It sounds like an org chart question. It is actually a question about what you want systems engineering to do.
The answer depends on what you believe systems engineering is for. If you believe it exists to produce artifacts — requirements documents, interface control documents, traceability matrices — then organizational placement is mostly a matter of convenience. If you believe it exists to make sure the right system gets built, on schedule, within budget, in a way the customer will actually accept, then placement is structural to success. The two answers point to different locations on the org chart, and the wrong choice will quietly degrade your program for years before anyone names the problem.
What Happens When Systems Engineering Reports to Program Management
The logic is intuitive. Program managers own the schedule, the budget, and the customer relationship. Systems engineering produces the documentation that satisfies contract deliverables. Put them together, and documentation gets done on time.
This model is common in defense primes, particularly on programs where CMMI or INCOSE compliance is audited. The chief systems engineer attends IPTs, owns the SEP, and is measured on whether documents are delivered to the customer at review gates. It works, in a narrow sense.
The problem is what gets crowded out. When systems engineering reports to program management, schedule pressure becomes the dominant forcing function for technical decisions. A trade study that might push an interface definition two weeks gets abbreviated or skipped. A requirements issue that would require renegotiating scope with the customer gets papered over with a note-to-file. The systems engineer is structurally positioned to serve the program’s administrative needs, not its technical integrity.
Over time, the function becomes reactive. Requirements are baselined because the schedule demands it, not because they’re stable. The RTM exists to satisfy a CDRL, not to support change impact analysis. When a subsystem team asks “what does the system actually need from this interface?”, the answer is a document number, not an engineer with context.
The tell: systems engineers spend more time formatting documents than resolving ambiguity.
What Happens When Systems Engineering Reports to Engineering
The opposite placement has its own failure mode. When the chief systems engineer reports through engineering — to a VP of Engineering or a Chief Engineer — the function gains technical credibility and independence from schedule pressure. Trade studies get done properly. Interface definitions are negotiated rather than mandated. The systems engineers understand the hardware.
What they lose is the customer and the program clock.
Systems engineers embedded in engineering organizations tend to optimize for technical correctness over customer acceptance. Requirements get refined and refined without ever being validated against what the customer actually needs. Schedule impact assessments become engineering estimates disconnected from program realities. The systems engineer may produce excellent technical work that the program office learns to route around, because the function has become slow-moving and internally focused.
The tell: program managers stop inviting systems engineers to customer meetings because they “get in the way.”
Both failure modes are real. Both are common. Neither is inevitable — but both are predictable consequences of structural misalignment, not individual failure.
The Organizational Models That Work
Defense primes have wrestled with this for decades. The models that survive program stress share a structural feature: systems engineering has independent reporting authority, with formal interface rights to both engineering and program management.
At the program level, this typically means the chief systems engineer is a peer of the chief engineer and the program manager — all three reporting to the same program director or business unit lead. This gives the systems engineer standing to escalate technical risk to program leadership without going through the chain she’s escalating about. It also means she has a seat at the table when the customer calls.
At the company level, in organizations below about 500 people, this often translates to a Director or VP of Systems Engineering who reports to the CEO or COO, with functional authority over systems engineering practice across all programs. The function is not owned by any single program, and therefore cannot be subordinated to any single program’s schedule or politics.
Innovative hardware startups — particularly in defense tech, satellite, and autonomous systems — have been more aggressive about this. Anduril, Joby, and comparable companies have built systems engineering functions that operate as technical authority across the organization, with formal processes for resolving conflicts between systems requirements and engineering implementation. The chief systems engineer has organizational weight, not just technical responsibility.
What these models share: the systems engineering function has access to customer requirements without going through a PM filter, and has access to engineering reality without going through an engineering manager who owns headcount.
Signs Your Systems Engineering Function Is Structurally Misplaced
The failure modes above produce recognizable symptoms. None of them are instantly fatal, which is why they persist. They compound.
Requirements instability after CDR. When requirements are still being negotiated after critical design review, it usually means systems engineering didn’t have the standing to force closure earlier. Either schedule pressure suppressed the debate, or engineering teams didn’t take requirements as authoritative.
RTMs that nobody queries. A traceability matrix that exists as a deliverable artifact but isn’t used to assess change impact is a symptom of systems engineering positioned as a documentation function. Traceability should answer live questions, not just satisfy audits.
Trade studies completed after decisions are made. This is the most damaging pattern. If the systems engineer is producing trade study documentation to justify decisions already made by engineering or program management, the function is advisory at best and ceremonial at worst.
Systems engineers absent from subsystem design reviews. If the people responsible for interface definitions and derived requirements aren’t in the room when subsystem designs are being reviewed, they’ve been structurally sidelined.
Customer surprises at major reviews. When customers raise requirements issues at PDR or CDR that the internal team didn’t know were live, it means the systems engineering function wasn’t bridging customer needs to internal development — which is the job.
Tooling Reinforces Organizational Structure
This is where the organizational model and the tooling model interact in ways that matter.
Legacy requirements tools — IBM DOORS, Polarion, Jama Connect, and their peers — were designed around a document ownership model. Someone owns the SRS. Someone owns the ICD. Someone owns the RTM. These tools enforce custody, not collaboration. When systems engineering uses a tool that treats requirements as documents to be owned and released, the tooling reinforces the silo, regardless of what the org chart says.
The other half of the equation: if you restructure systems engineering to operate cross-functionally — with independent standing and access to both engineering and program management — your tooling needs to support that model. Requirements, interfaces, trade decisions, and traceability need to be visible and queryable by all three functions simultaneously.
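The difference between a document-custody RTM and a queryable model can be made concrete in a few lines. The sketch below is a toy, with invented requirement IDs and no particular tool’s API: it represents derivation links as a directed graph, so that change impact becomes a simple reachability query any of the three functions could run.

```python
from collections import defaultdict

# Hypothetical derivation graph: edges point from a source item to the
# items derived from it (system requirement -> derived requirement ->
# interface definition -> design element). IDs are illustrative only.
DERIVES = defaultdict(list)

def link(parent, child):
    DERIVES[parent].append(child)

link("SYS-001 pointing accuracy", "DER-014 star tracker update rate")
link("SYS-001 pointing accuracy", "DER-015 reaction wheel torque margin")
link("DER-014 star tracker update rate", "ICD-007 tracker bus timing")
link("ICD-007 tracker bus timing", "DSN-102 FPGA deserializer design")

def impact(item):
    """Return every downstream item affected if `item` changes.

    Depth-first traversal of the derivation graph; a `seen` set
    guards against cycles and duplicate visits.
    """
    seen, stack = set(), [item]
    while stack:
        for child in DERIVES[stack.pop()]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

# A PM asking "what does changing SYS-001 touch?" gets a live answer,
# not a request to the SE team to pull a report:
print(sorted(impact("SYS-001 pointing accuracy")))
```

The point of the sketch is the shape of the query, not the implementation: when traceability lives in a shared graph rather than a released document, schedule risk from a requirements change is one traversal away for anyone with access.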
This is what makes Flow Engineering’s model structurally relevant to the organizational question. Flow Engineering was built as a graph-based, AI-native requirements platform that connects systems engineers, program managers, and engineering teams to a shared model rather than siloed document chains. When a program manager wants to understand schedule risk from a requirements change, she can see it in the same model the systems engineer is working in. When an engineer needs to understand the derived requirements driving a design constraint, the query is live, not a request to the SE team to pull a report.
The practical effect is that Flow Engineering supports the organizational design described above — systems engineering as a connective function — rather than the document-custody model that consolidates authority in one chain. This is not a silver bullet for organizational dysfunction: tooling doesn’t fix reporting relationships. But tooling that forces shared visibility makes the independent systems engineering model easier to sustain, and makes siloed operation harder to slip back into.
A Decision Framework for Your Situation
For a 150-person hardware company building its systems engineering function, here’s a concrete framing:
If your programs are primarily contract-driven with external customer reviews: Systems engineering needs direct access to the customer relationship. It cannot report to a program manager who filters that access. Independent reporting, with formal interface to the PMO, is the right structure.
If your products are internally defined (product company, not services): Systems engineering should report to product or engineering leadership, but with a formal charter that gives it authority over the requirements baseline and change control — not just advisory input.
If you’re in a scaling phase (50–200 people): The chief systems engineer should report to the CEO or COO, not to a VP of Engineering or VP of Programs. At this size, the function shapes how the whole company builds products. Subordinating it to either organization now embeds a silo that will only calcify as the company grows.
In all three cases: do not let the systems engineering function be defined by what it produces. Define it by what questions it is responsible for answering — and make sure it has organizational access to the people and data required to answer them.
The Honest Summary
The question of where systems engineering reports is a question about what the function is authorized to do. Report to program management, and it becomes a documentation function serving the program clock. Report to engineering, and it becomes a technical function without customer accountability. Neither is what systems engineering is supposed to be.
The organizations that get this right give systems engineering independent standing with access to both chains. They treat the chief systems engineer as a peer of the program manager and the chief engineer, not a subordinate to either. And they use tooling that makes requirements, interfaces, and traceability a shared operating reality — not a document archive that one function owns and the others query on request.
The 150-person hardware company asking this question has an advantage: the function isn’t fully formed yet. That means the structural decision can be made before the habits form. That window closes faster than most people expect.