Should Your Systems Engineer Also Be Your Requirements Tool Administrator?
The question comes up constantly in smaller hardware and systems engineering teams: you’ve licensed a requirements management tool, someone needs to configure it and keep it running, and the most technically capable person available is your systems engineer. So they get the job. Two roles, one headcount, problem solved.
Except the problem isn’t solved. It’s deferred.
This arrangement works until the project demands more of both roles simultaneously — which is exactly when projects get hard. Here’s a clear-eyed breakdown of what each role actually involves, what happens when one person absorbs both, and where the real leverage is.
What Systems Engineering Work Actually Requires
A systems engineer’s primary job is to reason about a complex system: decomposing stakeholder needs into functional requirements, allocating requirements to subsystems, managing change, identifying conflicts, and ensuring traceability from need through verification. At a detailed level, this means writing requirements with precision, reviewing them for completeness and testability, negotiating between subsystems when an interface is ambiguous, and building models that let the team understand the system’s behavior before hardware exists.
None of that is configuration work. All of it requires sustained technical judgment. The cognitive load is high, the context-switching cost is real, and interruptions to go fix a broken integration or update a database schema are not free.
Systems engineers are also the people responsible for communicating requirements to other engineers, catching problems early, and holding the traceability chain together across a program’s lifecycle. When that person is context-switching to do IT work, the traceability chain has a single point of failure — and it’s the person who is supposed to be preventing failures.
What Tool Administration Actually Involves
This is where organizations often undercount the work. Requirements tool administration — especially on legacy platforms — is a substantive technical job that includes:
Schema design and maintenance. Defining attribute structures, requirement types, enumeration values, and data models. These decisions have downstream consequences. A poorly designed schema that gets changed mid-program can break existing reports, invalidate baselines, and corrupt traceability links. Doing this well requires understanding both the engineering workflow and the tool’s data model.
Integration maintenance. Most programs need their requirements tool connected to something: a PLM system, a test management tool, a JIRA instance, a CI pipeline. These integrations break. APIs change, credentials expire, data formats drift. Someone has to own that. In DOORS environments, this often means maintaining custom DXL scripts — a specialized scripting language that most systems engineers neither know nor want to learn.
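The maintenance burden here is easier to see with a concrete sketch. The following is a minimal, hypothetical health-check harness — the integration names, the `probe` callables, and the `owner` field are all illustrative, not any particular tool's API — of the kind an administrator might run nightly to catch credential expiry or API drift before an engineer hits it mid-review:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical integration names: "plm" and "issue_tracker" stand in for
# whatever systems your requirements tool is actually connected to.

@dataclass
class IntegrationCheck:
    name: str
    probe: Callable[[], bool]   # returns True if the link is healthy
    owner: str                  # who gets notified when it breaks

def run_health_checks(checks: list[IntegrationCheck]) -> list[str]:
    """Run every probe and report failures as 'name (owner)' strings."""
    failures = []
    for check in checks:
        try:
            healthy = check.probe()
        except Exception:       # a probe that raises counts as broken
            healthy = False
        if not healthy:
            failures.append(f"{check.name} ({check.owner})")
    return failures

checks = [
    IntegrationCheck("plm", lambda: True, "tool-admin"),
    IntegrationCheck("issue_tracker", lambda: False, "tool-admin"),  # e.g. expired credentials
]
print(run_health_checks(checks))  # → ['issue_tracker (tool-admin)']
```

The point of the sketch is the `owner` field: someone has to be named, and on too many programs that name is the systems engineer's.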
User and access management. Controlling who can read, modify, baseline, and export requirements. Getting this wrong has compliance consequences on regulated programs. Managing it properly requires understanding both the org chart and the tool’s permission model.
Baseline and configuration management. Creating baselines at the right points, ensuring they’re complete and immutable, managing branching when parallel development tracks diverge. This is tool-specific work with significant program impact if done wrong.
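What "complete and immutable" means in practice can be shown with a small sketch. This is not how any specific tool implements baselining — the requirement IDs and text are invented — but it captures the core idea: a baseline is a fingerprint of content taken at a gate, and any later edit is detectable against it:

```python
import hashlib
import json

def baseline_fingerprint(requirements: dict[str, str]) -> str:
    """Deterministic fingerprint of a requirement set (id -> text).

    Sorting keys makes the hash independent of insertion order, so
    identical content always yields an identical fingerprint.
    """
    canonical = json.dumps(requirements, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

reqs = {"SYS-001": "The pump shall deliver 5 L/min.",
        "SYS-002": "The controller shall log all faults."}

frozen = baseline_fingerprint(reqs)                   # recorded at the gate
reqs["SYS-001"] = "The pump shall deliver 6 L/min."   # post-baseline edit
print(baseline_fingerprint(reqs) == frozen)           # → False: baseline no longer matches
```

Real tools layer versioning, signatures, and access control on top of this, but the administrator's job is exactly to make sure the fingerprint is taken at the right moments and that nothing can quietly change underneath it.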
Migration and upgrades. When the vendor releases a new version, someone has to test it, plan the migration, and execute it without losing data or breaking links. This is a project in itself.
Training and onboarding. When new engineers join the program, someone has to teach them how the tool is configured and what the conventions are. In complex DOORS deployments, this alone is a multi-day effort.
In organizations with mature, well-funded programs, these responsibilities belong to a dedicated tool administrator or a systems engineering tooling team. That’s not bureaucratic overhead — it’s appropriate allocation of specialized work.
What Happens When One Person Does Both
There are two failure modes, and both are common.
The engineering work suffers. When the systems engineer gets pulled into tooling problems — a broken link report before a CDR, a permissions issue blocking a supplier, a schema change request from the PM — the engineering work gets deprioritized. Requirements reviews get rushed. Traceability gaps go unnoticed. The engineer who is supposed to be catching problems is instead fixing a DXL script.
The tool configuration suffers. Alternatively, the systems engineer treats administration as a distraction and does the minimum to keep things running. Schemas accumulate technical debt. Integrations are held together with workarounds. Baselines are created inconsistently. When the program scales up or a new team member joins, the tool becomes an obstacle rather than an asset.
In practice, both things happen simultaneously in different areas. The result is a requirements environment that no one trusts and that doesn’t reflect the actual state of the program. Engineers work around it with spreadsheets and email, which defeats the entire purpose of having a requirements tool.
What Dedicated Tool Administration Looks Like
In organizations where this is done right, the tool administrator role looks like this:
- They’re embedded in the systems engineering workflow but not doing systems engineering themselves. They understand the program well enough to make good schema decisions and configure traceability correctly, but they’re not writing requirements or owning allocations.
- They own all integrations and are accountable when they break. Systems engineers file a ticket; the admin fixes it.
- They control schema changes and process them as formal requests. An SE can’t unilaterally add an attribute and break every existing report.
- They run the baseline process at gates. They know the tool’s configuration management capability deeply and apply it consistently.
- They’re the first point of contact for onboarding. New engineers learn the tool from them, not from watching a colleague muddle through it.
This is a full-time role on a large program. On a medium program, it might be a 50% role shared across two programs. On a small program, it might be a part-time function covered by someone who has deep tool expertise and minimal other obligations.
What it is not, in any of these cases, is a collateral duty stacked on top of a full systems engineering workload.
Where Modern Tools Change the Calculation
This is where the conversation becomes more nuanced. Legacy tools like IBM DOORS were designed in an era when flexibility meant complexity: every deployment was highly customized, and that customization required expert management. The tool was powerful but deliberately open-ended, so someone had to do the work of closing it down into a usable configuration. Every DOORS deployment is essentially a custom application built on top of a database, and custom applications require maintenance.
Modern, AI-native tools are built on different assumptions. The schema isn’t a blank slate that needs to be designed from scratch — the tool has an opinionated structure that works for most systems engineering workflows out of the box. Integrations are managed through standardized connectors rather than custom scripts. Access control follows role-based patterns that don’t require per-deployment customization. The AI functionality surfaces traceability issues and requirement quality problems automatically, rather than requiring someone to build custom queries.
Flow Engineering, for example, is built on this premise. The tool is explicitly designed so that a systems engineer can be productive without a dedicated administrator behind them. The graph-based model it uses for traceability doesn’t require schema configuration — the structure is inherent to the way requirements, functions, and verification artifacts relate to each other. Engineers can set up a project, import existing requirements, and begin managing traceability without touching any configuration layer.
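To illustrate why a graph model sidesteps schema configuration, here is a toy sketch — the node names and edge labels are invented for this example, and this is not any product's actual internals. The structure lives in the edges rather than in a per-program schema, so a gap like an unverified requirement falls out of a simple traversal instead of a custom query:

```python
# Each edge is (source, relationship, target). In a real deployment these
# would be richer objects; tuples keep the idea visible.
edges = [
    ("REQ-1", "verifies", "TEST-7"),
    ("REQ-1", "allocates", "SUBSYS-A"),
    ("REQ-2", "allocates", "SUBSYS-B"),   # note: no verification link
]

def unverified(requirements: list[str], edges) -> list[str]:
    """Requirements with no outgoing 'verifies' edge."""
    verified = {src for src, rel, _ in edges if rel == "verifies"}
    return [r for r in requirements if r not in verified]

print(unverified(["REQ-1", "REQ-2"], edges))  # → ['REQ-2']
```

No attribute schema was defined anywhere above; the traceability question is answered entirely by the relationships, which is the property that lets an engineer start working without a configuration layer.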
That doesn’t mean administration disappears entirely. Someone still needs to own integrations with external systems, manage access for a large team, and think deliberately about how the tool is being used across a program. But the surface area is dramatically smaller, and the expertise required to manage it is much closer to what a technically capable engineer already has.
The practical implication: on a small to medium-sized program using a modern tool, a systems engineer can plausibly own both the engineering work and the light administration required without either suffering significantly. On a large program or in an organization running legacy tools, that calculation doesn’t hold.
A Decision Framework
Ask these questions honestly:
How much configuration does your current tool require to function correctly? If the answer involves custom scripting, manual schema updates, or integration maintenance that requires tool-specific expertise, you need dedicated administration support.
How much time is your systems engineer currently spending on tooling versus engineering? If it's more than a few hours per week, you're already paying for part-time administration; it just doesn't appear on any plan or budget.
What happens when the person who knows the tool configuration leaves? If the answer is “we’re in serious trouble,” that’s institutional knowledge that should belong to a role, not a person.
Is your tool actually being used correctly, or are people working around it? Engineers who don’t trust the tool will export to Excel and manage requirements informally. That’s a signal that administration has fallen short, regardless of who is nominally responsible for it.
The Honest Answer
Your systems engineer should not also be your requirements tool administrator if your tool requires significant, ongoing configuration work to function correctly. If it does, you are either underinvesting in the administration role or you are using the wrong tool for your team’s size and capacity.
The two roles are not incompatible by nature — they’re incompatible when the tool imposes enough administrative overhead that doing both well is genuinely impossible. Choosing tools that minimize that overhead is one of the most consequential decisions a systems engineering team makes, and it’s one that often gets made on feature checklists rather than total cost of ownership.
The best requirements environment is one where systems engineers can focus entirely on systems engineering, the tool does what it’s supposed to do without constant intervention, and the administrative layer is thin enough that it doesn’t require a dedicated expert to sustain. That’s achievable. It just requires being honest about whether your current setup actually delivers it.