Software-Defined Vehicles Are Rewriting How Automotive Teams Do Systems Engineering
The shift from hardware-defined to software-defined architecture isn’t just a design trend — it’s breaking requirements management workflows that automotive teams have relied on for decades.
For most of automotive history, a vehicle’s capabilities were fixed at the factory gate. The hardware defined what the car could do. Software ran on top of it — but software was largely invisible to the customer, and updates were rare events that happened at dealerships.
That model is gone. Tesla demonstrated it could be otherwise. Every major OEM is now committed to software-defined vehicle (SDV) architectures, where centralized compute platforms replace distributed ECUs, and software updates continuously change what the vehicle does, how it performs, and which features the customer has access to. The hardware becomes a platform. The product becomes the software running on it.
This is not a cosmetic change. It is a fundamental restructuring of what a vehicle is, and it forces every upstream engineering discipline — including systems engineering and requirements management — to adapt or become a bottleneck.
What the Hardware-Defined Model Assumed
The traditional automotive development process was built around a relatively stable assumption: hardware and software co-define a vehicle’s behavior at the time of manufacture. Requirements were organized accordingly. Functional requirements mapped to subsystems. Subsystems mapped to suppliers. Suppliers delivered components that met specifications. Systems engineers verified that the assembled vehicle satisfied the top-level requirements.
ASPICE, ISO 26262, and the tooling that supports them — IBM DOORS, Jama Connect, Polarion — were all designed for this model. Documents, baselines, hierarchical decomposition, and change management processes optimized for infrequent, high-cost changes. That wasn’t a flaw. It matched the reality of hardware-constrained development cycles.
The problem is that SDV architectures don’t inherit those constraints. Software can change after the vehicle ships. Features can be added, removed, or modified via OTA updates. Behavioral requirements that were fixed at Job 1 — the start of series production — are now living specifications. The gap between what these legacy processes assume and what SDV development actually looks like is getting wider every year.
What Actually Changes in SDV Systems Engineering
Three structural shifts in SDV development directly stress traditional requirements processes.
Decoupled lifecycles. In a hardware-defined vehicle, a door control module has a defined specification, a defined supplier, and a defined end-of-life. In an SDV, the same physical module may run different software stacks across vehicle generations, receive feature updates independent of hardware refreshes, and have its functional behavior modified by software layers it doesn’t own. Requirements that were once written against hardware boundaries now have to account for software layers that don’t respect those boundaries.
Continuous delivery expectations. Automotive OEMs are borrowing release cadences from consumer software — monthly or even more frequent OTA update cycles. A requirements process designed around annual model years can’t support this. Change management workflows that require weeks of review to baseline a requirement revision become a tax on delivery velocity. Teams are either bypassing the process (and losing traceability) or slowing down delivery (and losing competitive ground).
Feature ownership fragmentation. In a traditional vehicle program, you could draw an org chart and a system architecture on the same whiteboard. In an SDV program, the entity that owns a customer-facing feature may not own the hardware it runs on, the OS it depends on, the safety case that covers it, or the regulatory submission that certifies it. Requirements need to carry enough context to survive handoffs across all of those boundaries. That means richer metadata, more explicit dependency modeling, and traceability that works horizontally across teams, not just vertically within a subsystem hierarchy.
What Existing Tools Get Right — and Where They Fall Short
The incumbent requirements management platforms are not irrelevant in SDV programs. IBM DOORS Next, Jama Connect, and Polarion each bring genuine capabilities that SDV programs rely on.
DOORS Next has deep integration with the IBM Jazz ecosystem and strong support for formal change management — useful when a safety-critical requirement needs a documented audit trail through multiple review cycles. Jama Connect handles cross-functional review workflows well, particularly for teams that include regulatory, legal, and customer stakeholders who aren’t daily users of an engineering tool. Polarion’s tight integration with the Siemens toolchain makes it credible for teams already invested in that ecosystem.
The honest limitation these tools share is that they were architected around documents and hierarchies. A requirement lives in a module. That module lives in a folder. Traceability is expressed as links between items in those modules. This structure works when the architecture is relatively stable and the primary movement is top-down decomposition.
SDV development creates requirements relationships that aren’t hierarchical — a software feature has dependencies on hardware constraints, AUTOSAR service interfaces, safety goals, and OTA delivery manifests, all simultaneously. Forcing those relationships into a document hierarchy creates artificial structure that obscures actual dependencies. When something changes, engineers can’t tell from the tool what else is affected, because the tool’s model doesn’t capture the actual dependency graph. They do the impact assessment in their heads or in spreadsheets.
The Case for Graph-Based, AI-Native Requirements Management
The requirements model that SDV programs need isn’t a better document. It’s a graph. Nodes are requirements, design decisions, test cases, safety goals, and architectural constraints. Edges are the typed relationships between them — derives from, constrains, tests, conflicts with, implements. When a software team proposes an OTA update that changes a behavioral parameter, an engineer should be able to query the graph and immediately see: what safety goals touch this parameter, what test cases cover it, and what hardware bounds constrain it.
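The query described above can be sketched in a few lines. This is a toy model under assumed names — the node IDs, relationship types, and parameter are invented for illustration — but it shows the shape of the lookup: given a changed item, return everything linked to it, grouped by relationship type.

```python
from collections import defaultdict

class RequirementGraph:
    """Toy typed-edge graph: nodes are IDs, edges carry a relationship label."""

    def __init__(self):
        # For each target node, the list of (source, relationship) pairs.
        self.edges_in = defaultdict(list)

    def link(self, source, rel, target):
        self.edges_in[target].append((source, rel))

    def impact_of(self, node):
        """Everything pointing at `node`, grouped by relationship type."""
        impacted = defaultdict(list)
        for source, rel in self.edges_in[node]:
            impacted[rel].append(source)
        return dict(impacted)

g = RequirementGraph()
g.link("SG-007", "constrains", "REQ-042")       # safety goal
g.link("HW-CONSTR-9", "constrains", "REQ-042")  # hardware bound
g.link("TC-311", "tests", "REQ-042")            # covering test case

# Before approving an OTA change that touches REQ-042, query the graph:
print(g.impact_of("REQ-042"))
# {'constrains': ['SG-007', 'HW-CONSTR-9'], 'tests': ['TC-311']}
```

A real implementation needs typed nodes, transitive traversal, and versioning, but even this sketch answers a question a document hierarchy cannot: "what else does this change touch?"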
This isn’t a theoretical architecture. It’s becoming the practical requirement for teams that need to manage change at SDV velocity.
Tooling is beginning to catch up. Flow Engineering, built specifically for hardware and systems engineering teams, organizes requirements as connected nodes in a model rather than rows in a document. Its AI layer helps teams identify gaps in traceability, flag requirements that lack sufficient downstream coverage, and surface conflicts that would otherwise sit invisible until integration testing. For automotive teams managing the interface between hardware platform requirements and software feature requirements — the exact seam where SDV programs generate the most ambiguity — this graph-native approach handles what a document hierarchy can’t.
Flow Engineering’s deliberate focus on hardware and systems engineering workflows also means it’s not trying to be an enterprise ALM platform or a project management tool. Teams that need deep ITSM integration or portfolio-level resource management will need to pair it with other tools. But for the core problem of maintaining coherent, traceable requirements across the hardware-software boundary in an SDV program, its model is well-matched to the problem.
What’s Actually Happening on the Ground
Talk to systems engineers at Tier 1 suppliers and OEMs and a consistent pattern emerges. Teams know their current process doesn’t scale to SDV cadences. They’re not abandoning their existing tools — compliance requirements, supplier contracts, and organizational inertia all keep legacy tools in place. But they’re building parallel workflows alongside them.
In practice, this looks like: a DOORS baseline maintained for regulatory submissions, and a separate, more dynamic model maintained in a newer tool for active development work. The two are loosely synchronized, usually by hand or with fragile scripts. It’s not elegant, but it reflects a real transition state — teams that need to comply with yesterday’s process while building tomorrow’s product.
The risk in this dual-process approach is traceability drift. When the regulatory baseline and the active engineering model diverge, the gap stays invisible until someone asks a hard question during an audit or an incident investigation. "That requirement is in DOORS" and "that feature is in the other tool" are not answers that satisfy a safety review board.
Teams that are managing this transition most effectively are being explicit about which system is the source of truth for which type of information, and investing in the integration layer between them. That’s not fully solved by any single tool today — but it’s the right problem to be working on.
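Even a crude integration layer beats silent divergence. As a hedged sketch of the kind of drift check teams build, the script below compares requirement IDs exported from two systems as CSV. The file format and the `req_id` column name are assumptions; real exports from DOORS or any other tool will differ.

```python
import csv

def load_ids(path, id_field="req_id"):
    """Read one exported CSV and return the set of requirement IDs in it."""
    with open(path, newline="") as f:
        return {row[id_field] for row in csv.DictReader(f)}

def drift_report(baseline_csv, model_csv):
    """Compare the regulatory baseline export against the engineering model export."""
    baseline = load_ids(baseline_csv)
    model = load_ids(model_csv)
    return {
        # Baselined for compliance but absent from active development:
        "only_in_baseline": sorted(baseline - model),
        # Under development but never baselined for the regulator:
        "only_in_model": sorted(model - baseline),
    }
```

Run nightly, a report like this turns traceability drift from an audit-day surprise into a routine backlog item. It is not the integration layer itself, only the alarm that tells you the layer is needed.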
Honest Assessment
The software-defined vehicle trend is real, the engineering consequences are real, and the tooling gap is real. Legacy requirements management tools were built for a development model that SDV programs are actively dismantling. They won’t disappear overnight — compliance requirements alone ensure they’ll be in use for another decade — but their limitations are becoming increasingly visible to the engineers who use them daily.
The teams that will navigate this transition well are the ones who stop waiting for their existing tools to solve a problem those tools weren’t designed for, and start building requirements workflows that match SDV architecture: graph-based, continuously maintained, AI-assisted for impact analysis, and designed to survive the hardware-software boundary rather than pretend it doesn’t exist.
The tools to do this exist now. The process discipline to use them well is the harder, more important investment.