What Is ASPICE? Automotive SPICE Explained for Systems Engineers
Automotive SPICE, usually abbreviated ASPICE, is a process assessment model used by automakers and their Tier 1 suppliers to evaluate how capable and disciplined a software development organization is. It does not certify products. It does not guarantee safety. What it does is provide a structured, auditable answer to the question: does this supplier actually engineer software the way they say they do?
That question matters because modern vehicles contain upwards of 150 million lines of software across dozens of ECUs, and the OEMs writing the checks have learned the hard way that good intentions do not substitute for good process. ASPICE exists to make process quality visible, comparable, and contractually enforceable.
The Foundation: ISO/IEC 33020
ASPICE is built on top of ISO/IEC 33020, the international standard that defines the measurement framework for process capability. ISO/IEC 33020 specifies how to construct a rating scale, how to define process attributes, and how to conduct a valid assessment. ASPICE is, in ISO terminology, a Process Assessment Model (PAM) — it provides the process reference content (what processes should exist and what they should accomplish) while ISO/IEC 33020 provides the measurement machinery (how to score them).
The practical consequence is that an ASPICE assessment produces numerically comparable scores across different suppliers, different programs, and different assessment firms. A Level 2 rating from one certified assessor should mean the same thing as a Level 2 rating from another. That comparability is why OEMs can write Level 2 requirements into supplier contracts as a hard gate.
The current release is Automotive SPICE version 4.0, published in 2023 by the VDA Quality Management Center (VDA QMC); the model was originally developed by the Automotive Special Interest Group (Automotive SIG) under The SPICE User Group. Version 4.0 aligns the process architecture more closely with ISO/IEC/IEEE 15288 and 12207, which has practical consequences for how system and software processes interlock.
The Process Reference Model
ASPICE organizes development activities into a Process Reference Model (PRM). The model contains several process groups, but three carry most of the weight in supplier development assessments.
System Engineering processes cover the top-level work: eliciting and managing system requirements, defining system architecture, integrating system components, and conducting system qualification testing.
Software Engineering processes cover the implementation layer: software requirements analysis, software architectural design, software detailed design and unit construction, software unit verification, software integration and integration testing, and software qualification testing.
Supporting processes cover the horizontal activities that give the engineering work integrity: configuration management, change request management, problem resolution management, quality assurance, and measurement.
Each process is defined by a purpose statement and a set of outcomes — the observable results that should exist if the process is being performed. These outcomes are what assessors look for evidence of. A software requirements analysis process, for example, has outcomes including that software requirements have been developed, are consistent with system requirements, are testable, and have attributes assigned to them.
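To make that outcome list concrete, here is a minimal sketch of what a single software requirement record could carry; the Python field names and status values are illustrative assumptions, not ASPICE terminology:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    DRAFT = "draft"
    REVIEWED = "reviewed"
    BASELINED = "baselined"

@dataclass
class SoftwareRequirement:
    """One SWE.1 work product: a requirement plus the attributes assessors expect."""
    req_id: str                      # stable identifier, never reused
    text: str                        # the requirement statement itself
    parent_ids: list[str]            # upward trace to system requirements (SYS.2)
    rationale: str                   # why the requirement exists
    owner: str                       # who answers for it
    priority: int                    # e.g. 1 = must-have for the next release
    status: Status = Status.DRAFT    # lifecycle state under change control
    test_ids: list[str] = field(default_factory=list)  # downward trace to test cases

    def is_structurally_consistent(self) -> bool:
        """A cheap structural proxy for two SWE.1 outcomes: consistency with
        system requirements (an upward link exists) and testability
        (at least one test case is linked)."""
        return bool(self.parent_ids) and bool(self.test_ids)
```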
Capability Levels: 0 Through 5
Every assessed process receives a capability level score. The levels are defined by ISO/IEC 33020 and apply uniformly across all process areas.
Level 0 — Incomplete. The process is not performed, or fails to achieve its purpose. Outcomes are absent or largely missing.
Level 1 — Performed. The process achieves its purpose. The basic work gets done. There is no requirement at this level for planning, tracking, or management discipline — just observable outcomes.
Level 2 — Managed. This is where ASPICE separates process execution from process management. A Level 2 process is planned, monitored, adjusted, and produces work products that are controlled. Specifically, Level 2 requires demonstrating two process attributes: Performance Management (PA 2.1: the process is planned and tracked against that plan) and Work Product Management (PA 2.2: work products are identified, controlled, reviewed, and subject to change management). Most of the contractual weight in the automotive supply chain sits at this level.
Level 3 — Established. The process is implemented using a defined, organization-standard process tailored appropriately for the project. Level 3 shifts from project-level management to organizational-level process definition. Assessors look for documented standard processes, tailoring guidelines, and evidence of use.
Level 4 — Predictable. The process operates within defined limits using quantitative data. Process performance is measured and managed statistically. Few suppliers pursue Level 4 broadly; it applies meaningfully in high-volume, safety-critical contexts.
Level 5 — Optimizing. The process is continuously improved through quantitative targets and systematic innovation. Level 5 is rare in practice and not a contract requirement in any standard automotive program.
The rating rules under ISO/IEC 33020 require that the attributes of the target level be at least largely achieved and that all lower-level attributes be fully achieved. A process cannot be rated Level 2 unless its Level 1 attribute is fully in place.
The Processes Most Commonly Assessed
While ASPICE covers a broad range of processes, supplier assessments focus most heavily on the development-critical areas.
Requirements Engineering (SYS.2 / SWE.1). System requirements analysis and software requirements analysis are almost always in scope. Assessors examine whether requirements are formally elicited, documented with attributes (rationale, owner, priority, status), traced bidirectionally to parent requirements and to test cases, and managed under change control. This process area consistently generates the most findings at Level 2 because many teams write requirements but do not manage them — there is no change history, no impact analysis, and traceability exists only in a static document rather than a living model.
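The bidirectional traceability assessors probe can be pictured as a simple link-integrity check in both directions. A hedged sketch, with hypothetical IDs and a deliberately broken dataset:

```python
# Every software requirement must trace up to a system requirement and
# down to at least one test case. The data shapes are illustrative assumptions.
requirements = {
    "SWR-001": {"parents": ["SYSR-010"], "tests": ["TC-101", "TC-102"]},
    "SWR-002": {"parents": [], "tests": ["TC-103"]},        # orphan: no upward trace
    "SWR-003": {"parents": ["SYSR-011"], "tests": []},      # gap: no test coverage
}

def trace_findings(reqs: dict) -> list[str]:
    findings = []
    for req_id, links in reqs.items():
        if not links["parents"]:
            findings.append(f"{req_id}: no upward trace to a system requirement")
        if not links["tests"]:
            findings.append(f"{req_id}: no downward trace to a test case")
    return findings

for finding in trace_findings(requirements):
    print(finding)
```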
System and Software Architecture (SYS.3 / SWE.2). Assessors look for design decisions that are traceable to requirements and for evidence that architectural alternatives were considered. Interface specifications and their traceability to requirements are a common evidence gap.
Software Detailed Design and Unit Construction (SWE.3). Detailed design and unit-level implementation practices are examined, with unit verification rated separately under SWE.4. Assessors ask for coding standards, static analysis results, peer review records, and evidence that code is traceable to design.
Testing (SWE.4, SWE.5, SWE.6, SYS.5). Unit verification, integration testing, software qualification testing, and system qualification testing are all assessed. The critical evidence is that test cases derive from requirements and that test results are recorded, reviewed, and linked back to requirements coverage.
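The coverage question behind that evidence is mechanical: given recorded test results linked to requirements, which requirements are verified, which are failing, and which were never exercised? A minimal sketch; the result records and verdict strings are illustrative assumptions:

```python
test_results = [
    {"test_id": "TC-101", "req_id": "SWR-001", "verdict": "pass"},
    {"test_id": "TC-102", "req_id": "SWR-001", "verdict": "fail"},
    {"test_id": "TC-103", "req_id": "SWR-002", "verdict": "pass"},
]
all_requirements = {"SWR-001", "SWR-002", "SWR-003"}

# Collect the set of verdicts recorded against each requirement.
verdicts: dict[str, set[str]] = {}
for result in test_results:
    verdicts.setdefault(result["req_id"], set()).add(result["verdict"])

for req_id in sorted(all_requirements):
    if req_id not in verdicts:
        print(f"{req_id}: UNTESTED")     # a coverage gap assessors will flag
    elif "fail" in verdicts[req_id]:
        print(f"{req_id}: FAILING")      # linked results exist, but not all pass
    else:
        print(f"{req_id}: verified")
```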
Supporting Processes. Configuration management and change request management are frequently the weakest areas in first-time assessments. Many development teams treat version control as a technical tool rather than a process with formally managed baselines, and they lack any formal mechanism for tracking change requests from submission through analysis, implementation, and verification.
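That formal mechanism is, at its core, a state machine with defined transitions, so every change request carries a reviewable history. A minimal sketch; the states and transitions are illustrative assumptions about a typical workflow, not a prescribed ASPICE model:

```python
from enum import Enum, auto

class CRState(Enum):
    SUBMITTED = auto()
    ANALYZED = auto()
    APPROVED = auto()
    REJECTED = auto()
    IMPLEMENTED = auto()
    VERIFIED = auto()
    CLOSED = auto()

# Only these moves are legal; anything else is an audit finding waiting to happen.
ALLOWED = {
    CRState.SUBMITTED: {CRState.ANALYZED},
    CRState.ANALYZED: {CRState.APPROVED, CRState.REJECTED},
    CRState.APPROVED: {CRState.IMPLEMENTED},
    CRState.IMPLEMENTED: {CRState.VERIFIED},
    CRState.VERIFIED: {CRState.CLOSED},
    CRState.REJECTED: {CRState.CLOSED},
}

def transition(current: CRState, target: CRState) -> CRState:
    """Refuse undefined jumps, e.g. straight from SUBMITTED to IMPLEMENTED."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```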
How ASPICE Assessments Work in Practice
An ASPICE assessment is not a certification exam. It is a structured evidence review conducted by one or more trained assessors, typically over two to five days on-site (or split between document review and on-site interviews).
Before the assessment, the supplier identifies the scope — which project, which processes, which life cycle phases are in scope — and prepares a document package. This package typically includes requirements specifications, architecture documents, test plans and results, review records, configuration management logs, and change request records.
During the assessment, assessors conduct structured interviews with engineers, project managers, and quality leads. They cross-reference what people describe with the documentary evidence. A common failure mode is the “we do that but didn’t write it down” response — assessors cannot score what cannot be evidenced.
Each process attribute is rated on a four-point scale: Not Achieved (0% to 15%), Partially Achieved (over 15% to 50%), Largely Achieved (over 50% to 85%), and Fully Achieved (over 85% to 100%). The capability level assigned to a process follows the rule described earlier: the target level's attributes must be at least largely achieved, and all lower-level attributes fully achieved.
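The derivation is mechanical enough to express in a few lines. A minimal sketch in Python, assuming a flat dictionary of attribute ratings (the layout is an assumption; only Levels 1 and 2 are shown):

```python
def nplf(percent: float) -> str:
    """Map an achievement percentage to the NPLF rating scale."""
    if percent <= 15: return "N"   # Not achieved
    if percent <= 50: return "P"   # Partially achieved
    if percent <= 85: return "L"   # Largely achieved
    return "F"                     # Fully achieved

LEVEL_ATTRIBUTES = {1: ["PA 1.1"], 2: ["PA 2.1", "PA 2.2"]}  # truncated at Level 2

def capability_level(ratings: dict[str, str]) -> int:
    level = 0
    for target, attrs in sorted(LEVEL_ATTRIBUTES.items()):
        # The target level's attributes must be at least Largely achieved...
        if not all(ratings.get(a) in ("L", "F") for a in attrs):
            break
        # ...and every lower level's attributes must be Fully achieved.
        lower = [a for lvl, grp in LEVEL_ATTRIBUTES.items() if lvl < target for a in grp]
        if not all(ratings.get(a) == "F" for a in lower):
            break
        level = target
    return level

print(capability_level({"PA 1.1": "F", "PA 2.1": "L", "PA 2.2": "L"}))  # -> 2
print(capability_level({"PA 1.1": "L", "PA 2.1": "F", "PA 2.2": "F"}))  # -> 1
```

The second call illustrates the gating rule: strong Level 2 attributes cannot compensate for a Level 1 attribute that is only largely achieved.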
The assessment produces a Process Attribute Rating Summary and a findings report. Findings are classified as strengths, weaknesses, or improvement opportunities. Automotive OEMs typically receive the results directly or require that the supplier share them as part of supplier qualification.
Why Level 2 Is the Practical Minimum
Most automotive OEMs — Volkswagen Group, BMW, Mercedes-Benz, Stellantis, and their supply chain standards bodies — require Level 2 across the core development processes as a contractual qualification threshold. Several OEMs require it for software-relevant ECU projects; some require it for all development work above a complexity threshold.
The reason Level 2 is the threshold and not Level 1 is precisely the Performance Management and Work Product Management attributes. A supplier at Level 1 might produce good requirements — once, for this project, because the right engineer happened to be involved. A supplier at Level 2 has a process: requirements are planned, baselined, reviewed, changed only through a controlled mechanism, and traceable in a way an external party can verify.
That distinction matters enormously over a five-year vehicle program in which requirements change hundreds of times, engineering teams turn over, and by the time something goes wrong, nobody left on the program remembers what the original intent was.
How Modern Tooling Supports Level 2 Compliance
Reaching Level 2 in requirements engineering (SWE.1 and SYS.2) requires producing specific, auditable evidence: that requirements exist in a controlled state, that changes to them are tracked and managed, and that traceability to upstream and downstream artifacts is maintained continuously rather than reconstructed at assessment time.
This is where tooling makes a concrete difference. Teams that manage requirements in Word documents or spreadsheets consistently struggle at Level 2 assessments because they cannot demonstrate that their artifacts are under configuration control, that changes have a documented history, or that traceability is current rather than manually refreshed for the audit.
Tools like Flow Engineering address this directly. Flow Engineering is an AI-native requirements management platform built for systems and hardware engineering teams, and its architecture maps naturally to the evidence ASPICE Level 2 demands. Requirements are maintained in a structured, versioned model rather than as document prose, so change history is inherent rather than manufactured. Bidirectional traceability — from system-level requirements down through software requirements to test cases — is maintained as a live graph, not as a static matrix exported for review.
For teams approaching a first ASPICE assessment or working to close findings from a previous one, Flow Engineering’s traceability model gives assessors the kind of evidence they can actually score: not a spreadsheet claiming coverage, but a queryable model where every link has an author, a timestamp, and a state. The change management workflow — where requirement changes flow through a formal process with impact visibility — satisfies the Work Product Management attribute without requiring a separate change control overlay on top of a document editor.
Flow Engineering is focused on requirements and systems engineering specifically, which means it does not cover the full ASPICE process scope. Configuration management for software code, problem resolution, and quality assurance records typically require additional tooling. The deliberate specialization means the requirements-related evidence it produces is well-structured, but teams should not expect it to serve as an all-in-one ASPICE compliance platform.
Practical Starting Points
If your organization is preparing for an ASPICE assessment — or has received findings from one — the most productive initial steps are:
Audit your requirements artifacts first. Requirements engineering findings are the most common and most contractually consequential. Identify whether your current artifacts are under version control, whether changes have a managed history, and whether traceability to test cases is current; a minimal sketch of such a check follows this list.
Separate process existence from process evidence. Many teams perform the right activities but document them inadequately. Before adding new process steps, determine what evidence already exists and what is missing.
Treat supporting processes as load-bearing. Change request management and configuration management fail more Level 2 assessments than requirements quality does. These processes need explicit owners, explicit records, and explicit links to the development artifacts they control.
Start the tool conversation early. Migrating from document-based requirements management to a structured tool mid-assessment preparation is possible but costly. Evaluating tooling at the start of a program, or immediately after a first-round finding, gives teams time to populate and validate the model before evidence review.
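As a starting point for the audit in the first step above, here is a minimal sketch that checks whether requirements artifacts carry a managed change history, assuming they live in a Git repository; the file paths are hypothetical:

```python
import subprocess

# Hypothetical paths to the requirements artifacts under audit.
REQUIREMENT_ARTIFACTS = ["requirements/system.reqs", "requirements/software.reqs"]

for path in REQUIREMENT_ARTIFACTS:
    # One line per commit that touched this file, following renames.
    log = subprocess.run(
        ["git", "log", "--oneline", "--follow", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    if len(log) < 2:
        # A single commit means the file was dumped in once: no evidence of
        # a managed change history, which is exactly what assessors probe.
        print(f"{path}: only {len(log)} commit(s); change history is thin")
    else:
        print(f"{path}: {len(log)} commits of history")
```

A check this crude obviously does not demonstrate Work Product Management by itself, but it separates artifacts with any evidence trail from those with none before the assessors arrive.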
ASPICE is demanding by design. The automotive industry built it because the cost of process failure in embedded software — in terms of recalls, liability, and safety incidents — is high enough to justify the investment in structured process management. Understanding what the model actually requires, and what assessors are actually looking for, is the first step to building practices that hold up under scrutiny.