Model-Based Systems Engineering Adoption: The Gap Between Commitment and Execution
The commitment is usually genuine. An engineering director attends a conference, or a major customer starts asking about digital threads, or a program post-mortem reveals that half the integration failures traced back to requirements that existed in three different Word documents with no clear owner. The organization resolves to do MBSE properly. A tool gets selected or evaluated. Training is scheduled. And then, somewhere between the kickoff meeting and the first real program deliverable, the initiative stalls.
This is not a rare story. It is, by most accounts, the modal outcome.
Recent practitioner surveys put the gap in stark terms. The 2025 INCOSE Systems Engineering Vision survey found that while over 70% of aerospace and defense organizations reported active MBSE initiatives, fewer than 30% described those initiatives as integrated into their primary program workflows. The gap between “we are doing MBSE” and “MBSE is how we work” has not closed meaningfully in five years. If anything, the proliferation of tooling options has made the commitment-to-execution gap wider, not narrower.
Understanding why requires looking past the surface explanations.
The Tool Selection Trap
Ask an engineer whose MBSE initiative stalled what went wrong, and the first answer is usually “tool selection.” The evaluation cycle stretched too long. Different stakeholders wanted different capabilities. The procurement process added six months. By the time a tool was licensed, the program it was supposed to support had moved past the phase where modeling would have had the most impact.
This is real, but it is a symptom, not the cause. The deeper problem is that tool selection in MBSE is genuinely difficult because the tools are not interchangeable, and the criteria for selection depend on answers that most organizations haven’t worked out yet.
IBM DOORS and DOORS Next remain dominant in defense contracting, largely for contractual compliance reasons rather than engineering merit. They are document-centric systems that can be configured to approximate model-like traceability, but the configuration burden is substantial and the resulting structures are fragile. Teams that have lived in DOORS for two decades often cannot articulate what a model would give them that a well-structured DOORS database doesn't, which makes it nearly impossible to build an internal case for bearing the cost of migration.
Jama Connect has made genuine progress on usability and has a strong following in medical devices and automotive. Its review and approval workflows are well-suited to regulated industries. But its underlying data model is still document-and-item oriented, and teams that come to it expecting graph-based traceability end up working around the tool rather than with it.
Cameo Systems Modeler (now CATIA Magic) and Rhapsody are the serious SysML tools. They are powerful. They are also expensive, require significant training investment, and produce artifacts that look foreign to stakeholders who live in PowerPoint and Excel. The evaluation becomes a comparison between a tool that is familiar and inadequate versus a tool that is capable and alienating.
The organizations that navigate this successfully tend to make an early decision about what problem they are actually solving — requirements traceability, interface management, architecture exploration, verification planning — and select tooling against that specific problem. The organizations that fail tend to select tooling against a general capability checklist and then try to find the problem afterward.
The SysML Problem Is Real, But Misdiagnosed
SysML gets blamed frequently for MBSE failures, and the critique has some validity. The notation is dense. Block Definition Diagrams, Internal Block Diagrams, Parametric Diagrams, Activity Diagrams, Sequence Diagrams, Use Case Diagrams — a complete SysML model of a moderately complex system is not something a new practitioner can read fluently without months of practice. The cognitive load is high, and most systems engineers on active programs do not have months to spare.
But the failure mode is not usually “engineers tried to learn SysML and couldn’t.” It is “engineers were handed a SysML tool and asked to recreate documents they already had, in a notation that added no new insight.”
SysML is a modeling language, not a documentation format. Its value comes from the constraints and relationships it makes explicit — the forcing function of having to define what a block’s ports are before you can connect it to anything, the discipline of separating behavior from structure, the ability to propagate a parameter change through a model and see what breaks. None of that value is accessible if the team is using the tool to draw diagrams that describe what the Word document said.
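To make that concrete, here is a minimal sketch in plain Python, not SysML and not any particular tool's API, of what that forcing function looks like: connections are only legal between ports that have been declared, and a parameter change can be pushed through the model to see what breaks. The block names and the power budget are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    ports: set[str] = field(default_factory=set)
    params: dict[str, float] = field(default_factory=dict)

class Model:
    def __init__(self):
        self.blocks: dict[str, Block] = {}
        self.connections: list[tuple[str, str, str, str]] = []

    def add(self, block: Block) -> None:
        self.blocks[block.name] = block

    def connect(self, src: str, src_port: str, dst: str, dst_port: str) -> None:
        # The forcing function: blocks cannot be wired together until the
        # ports actually exist in the model.
        if src_port not in self.blocks[src].ports:
            raise ValueError(f"{src} has no port {src_port!r}")
        if dst_port not in self.blocks[dst].ports:
            raise ValueError(f"{dst} has no port {dst_port!r}")
        self.connections.append((src, src_port, dst, dst_port))

    def power_margin(self, supply: str, loads: list[str]) -> float:
        # Propagating a parameter change: adjust one block's draw and the
        # supply margin is recomputed from the model, not from memory.
        margin = self.blocks[supply].params["supply_W"]
        for load in loads:
            margin -= self.blocks[load].params["draw_W"]
        return margin

m = Model()
m.add(Block("EPS", ports={"pwr_out"}, params={"supply_W": 120.0}))
m.add(Block("Radio", ports={"pwr_in"}, params={"draw_W": 45.0}))
m.add(Block("Payload", ports={"pwr_in"}, params={"draw_W": 60.0}))
m.connect("EPS", "pwr_out", "Radio", "pwr_in")
m.connect("EPS", "pwr_out", "Payload", "pwr_in")

m.blocks["Payload"].params["draw_W"] = 90.0          # a design change ripples through
print(m.power_margin("EPS", ["Radio", "Payload"]))   # -15.0: the budget breaks
```

The point is not the code; it is that the structure refuses to stay silent when something is inconsistent, which is exactly what a document will happily do.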
This is the core of the learning curve problem. It is not syntactic. Engineers can learn the notation. The hard part is learning to think in models — to treat the model as the source of truth rather than an illustration of a truth that lives somewhere else.
Organizations that have made this transition successfully usually share one characteristic: they had a senior practitioner, internal or external, who had actually used a model to answer a question that couldn’t be answered from documents. That experience — seeing a model catch an interface conflict, or close a verification gap that three review cycles had missed — provides the motivation that no training curriculum can manufacture.
The Migration Problem Is Underestimated
For new programs starting from scratch, the MBSE on-ramp is hard but tractable. For programs already mid-execution with an existing document baseline, it is genuinely brutal.
The typical scenario: a program has a System Requirements Specification with 800 requirements, written over two years, reviewed and approved by the customer, living in a Word document or a legacy DOORS database. The organization wants to move to a model-based approach. The question immediately becomes: what does “moving” mean?
If it means importing the 800 requirements into a new tool, that is a data migration problem and it is solvable, if tedious. The resulting “model” will be a flat list of text strings with the same weaknesses the document had, now housed in more expensive software.
If it means restructuring those requirements into a proper model — decomposing system requirements to subsystem requirements, building allocation structures, defining verification methods, establishing interface requirements as first-class objects — that is months of engineering work on content that the program has already baselined. The customer may need to be re-engaged. The cost estimate does not include this work. The schedule does not accommodate it.
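The difference between the two options is easiest to see side by side. The sketch below is illustrative only; the requirement text, identifiers, subsystem names, and verification methods are invented, and the structures are generic Python rather than any tool's schema.

```python
from dataclasses import dataclass, field

# Option one: the import. The SRS contents become strings in a new tool.
flat_import = [
    "SYS-042: The vehicle shall provide 28 VDC power to all payloads.",
    "SYS-107: The vehicle shall downlink telemetry at no less than 2 Mbps.",
]

# Option two: the restructuring. Each requirement becomes an object with an
# allocation, a verification method, explicit interface links, and children.
@dataclass
class InterfaceRequirement:
    ident: str
    text: str

@dataclass
class Requirement:
    ident: str
    text: str
    allocated_to: list[str] = field(default_factory=list)
    verified_by: str = "TBD"            # test, analysis, inspection, demonstration
    interfaces: list[InterfaceRequirement] = field(default_factory=list)
    children: list["Requirement"] = field(default_factory=list)

payload_power_if = InterfaceRequirement("IF-003", "Payload power bus: 28 VDC +/- 4 V")

sys_042 = Requirement(
    "SYS-042",
    "The vehicle shall provide 28 VDC power to all payloads.",
    allocated_to=["EPS"],
    verified_by="test",
    interfaces=[payload_power_if],
    children=[
        Requirement(
            "EPS-012",
            "The EPS shall regulate the payload power bus to 28 VDC +/- 4 V.",
            allocated_to=["EPS"],
            verified_by="test",
            interfaces=[payload_power_if],
        ),
    ],
)

# The first option migrates 800 strings in an afternoon. The second is the
# months of engineering work the schedule does not accommodate.
```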
Most programs choose a third option, which is to run the document-based baseline in parallel with the new model-based approach, updating both, and hoping that over time the model becomes the system of record. This almost never works. The document baseline has contractual authority. It gets updated first. The model drifts. Within a year, the model is out of date and the team has stopped trusting it.
The organizations that have successfully migrated mid-program have generally done so by identifying a specific subsystem or interface — one that is actively causing problems — and modeling that subsystem comprehensively, using the model to resolve the problems, and demonstrating value before expanding scope. Selective, problem-driven migration rather than comprehensive, schedule-driven transformation.
What Separates Success From Failure
Across the practitioner accounts and survey data available, the pattern is consistent enough to describe with some confidence.
Successful MBSE adoptions start with a question the team cannot answer from existing documents. Not “we should be doing MBSE” but “we cannot close our verification matrix because we don’t know which test cases map to which requirements at the subsystem level, and we need to know by CDR.” The model exists to answer that question. Its value is immediate and specific. The team maintains it because they need it, not because the process requires it.
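A question that specific is also specific enough to automate. The sketch below shows the shape of the query, with invented requirement identifiers and test case links; the point is that it can only be asked of data that carries the mapping explicitly.

```python
# Requirement identifiers and test case links are invented for illustration.
requirements = {
    "EPS-012":  {"level": "subsystem", "verified_by": ["TC-101"]},
    "EPS-013":  {"level": "subsystem", "verified_by": []},            # the gap
    "COMM-040": {"level": "subsystem", "verified_by": ["TC-220", "TC-221"]},
    "SYS-042":  {"level": "system",    "verified_by": ["TC-101"]},
}

open_verification = [
    rid for rid, req in requirements.items()
    if req["level"] == "subsystem" and not req["verified_by"]
]
print(open_verification)   # ['EPS-013']: what has to be closed before CDR
```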
Failed adoptions start with a mandate. The organization has committed to MBSE. A tool has been purchased. All new programs will use it. Training will be provided. The model is built because it is required. Its value is theoretical and future-tense. Maintenance pressure evaporates as program schedules tighten.
This distinction sounds obvious in retrospect, but it runs against how organizations typically approach transformation initiatives. Enterprise-wide transformation mandates are how organizations signal strategic commitment. They are how budgets get allocated and tools get procured. The problem is that they set up exactly the wrong conditions for MBSE adoption, which requires bottom-up motivation from engineers who have seen models solve real problems.
The practical implication: successful MBSE adoption requires finding and amplifying early wins at the team level before scaling the mandate. The enterprise commitment provides resources and air cover. It cannot provide the motivation that comes from watching a model catch something the documents missed.
The Emerging Role of AI-Native Tooling
Over the past two years, a new category of tooling has started to address parts of this problem in ways that traditional MBSE tools have not.
The specific contribution is on the authoring and structuring side. One consistent barrier to MBSE adoption is the cost of getting requirements into a model-ready form — decomposed, attributed, linked, and free of the ambiguity that makes manual traceability unreliable. AI-native tools are beginning to automate significant parts of this work.
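What that work looks like in the small is easier to show than to describe. The sketch below is a deliberately crude, rule-based illustration of one slice of it, flagging requirement text that resists clean traceability; it is not any vendor's implementation, and the phrase list and example requirement are invented.

```python
import re

# Invented phrase list; real checkers are larger and tuned to the domain.
WEAK_PHRASES = ["as appropriate", "if possible", "adequate", "user-friendly"]

def authoring_findings(req_text: str) -> list[str]:
    """Flag wording that makes a requirement hard to trace or verify."""
    findings = []
    lowered = req_text.lower()
    if "shall" not in lowered:
        findings.append("no 'shall' statement")
    if req_text.count("shall") > 1:
        findings.append("compound requirement; consider splitting")
    findings += [f"weak phrase: {p!r}" for p in WEAK_PHRASES if p in lowered]
    if re.search(r"\bTBD\b|\bTBR\b", req_text):
        findings.append("unresolved TBD/TBR")
    return findings

print(authoring_findings(
    "The system shall provide adequate cooling and shall log faults as appropriate."
))
# ['compound requirement; consider splitting',
#  "weak phrase: 'as appropriate'", "weak phrase: 'adequate'"]
```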
Flow Engineering, built specifically for hardware and systems engineering teams, represents this approach at the requirements layer. Rather than asking engineers to author requirements in SysML notation or learn a new modeling paradigm, it uses graph-based data structures and AI-assisted authoring to help teams build connected requirement sets from the start — with traceability as an architectural feature rather than a documentation task performed after the fact. For teams that have stalled on MBSE because the authoring cost was prohibitive, tools like this lower the on-ramp without asking engineers to abandon their existing working patterns entirely.
The honest caveat: AI-assisted requirements tooling is not a substitute for systems modeling. It addresses the requirements layer, not the behavior or architecture layers that full SysML modeling covers. Teams with complex interface management or parametric analysis needs will still require dedicated modeling environments. But for organizations whose MBSE initiative has stalled specifically because getting requirements into a usable form was too costly, this layer of tooling removes a real barrier.
The Honest Assessment
MBSE is not hype. The engineering case for model-based approaches — consistency, traceability, the ability to propagate changes and assess impact, the reduction of ambiguity through formal structure — is sound and empirically supported on programs where it has been applied rigorously.
The adoption gap is real, persistent, and not primarily a technology problem. The tools exist. The training exists. The industry standards and frameworks exist. What is missing, in most stalled initiatives, is the connection between the model and a problem the team actually needs to solve today.
The organizations that are closing the gap are doing so by resisting the enterprise transformation framing and instead building model-based capability through specific, high-value applications — one interface, one subsystem, one verification closure problem at a time. They are treating MBSE as an engineering practice to be grown from demonstrated value, not a methodology to be installed from the top down.
That is slower and less legible as a strategic initiative. It is also, based on the available evidence, the approach that actually works.