What Threshold and Objective Mean
A requirement that says “the system shall achieve a range of 300 km” carries exactly one number. If the design comes in at 290 km during detailed design, the program faces a choice: grant a waiver, rebaseline the requirement, or descope something else. Every option costs schedule.
A requirement written with a threshold of 250 km and an objective of 350 km changes the calculus entirely. The design team knows that 250 km is the floor — anything below that is a contract failure. Anything above 250 km is acceptable. Anything that approaches 350 km is the preferred outcome, worth pursuing if the design budget allows. The program manager, the customer, and the systems engineer are all working from the same explicit trade space rather than haggling over a single number that was always an approximation anyway.
This is the threshold/objective distinction: threshold is the minimum acceptable performance level, the point beyond which the system fails to meet the requirement (for a lower-is-better parameter such as response time, the threshold is the largest acceptable value). Objective is the desired value, the performance level the customer actually wants and will reward in source selection or award fee. Both values are formally part of the requirement. Neither is aspirational margin held in someone’s notebook.
The construct is standard in U.S. Department of Defense acquisition and NASA programs. It appears in RFPs under the Key Performance Parameter (KPP) structure, in System Requirements Review (SRR) artifacts, and in contractor System Specifications. It is also used, under slightly different terminology, in ESA programs (minimum and goal values) and in commercial space programs that have adopted defense acquisition discipline.
Core Concepts
The Requirement Is Not a Single Number
The most important conceptual shift is recognizing that a requirement with threshold and objective values is still one requirement, not two. The threshold defines acceptability. The objective defines optimization direction. The requirement is satisfied when the threshold is met; it is met with excellence when the objective is approached or exceeded.
This structure formally encodes what every experienced engineer already knows: customers state performance goals, but those goals contain embedded margin, negotiated conservatism, and program-specific aspiration. The threshold/objective format makes that structure explicit and contractually visible rather than leaving it to informal interpretation.
In RFP language, you will typically see the format presented in a Key Performance Parameter table:
| Parameter | Threshold | Objective |
|---|---|---|
| Maximum Range | 250 km | 350 km |
| Time to First Fix | 45 sec | 15 sec |
| Mean Time Between Failures | 500 hr | 1200 hr |
Offerors proposing performance at or better than threshold on all KPPs are technically compliant. Offerors approaching or meeting objective values on discriminating parameters score higher in source selection. The trade space between threshold and objective is where competitive differentiation happens.
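That competitive trade space is easier to manage when both values travel with the requirement as data. A minimal sketch (Python, with illustrative class and field names, not any particular tool's schema) of the KPP table above as structured records:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Kpp:
    """A requirement parameter carrying both values as first-class data.

    `higher_is_better` captures the optimization direction: for range or
    MTBF the objective exceeds the threshold; for time-to-first-fix it is
    the reverse. Names here are illustrative, not a standard schema.
    """
    name: str
    units: str
    threshold: float
    objective: float
    higher_is_better: bool = True

    def is_compliant(self, predicted: float) -> bool:
        """Threshold defines acceptability."""
        if self.higher_is_better:
            return predicted >= self.threshold
        return predicted <= self.threshold

    def meets_objective(self, predicted: float) -> bool:
        if self.higher_is_better:
            return predicted >= self.objective
        return predicted <= self.objective

# The KPP table above as structured records:
kpps = [
    Kpp("Maximum Range", "km", threshold=250, objective=350),
    Kpp("Time to First Fix", "sec", threshold=45, objective=15,
        higher_is_better=False),
    Kpp("Mean Time Between Failures", "hr", threshold=500, objective=1200),
]

assert kpps[0].is_compliant(290)          # above threshold: compliant
assert not kpps[0].meets_objective(290)   # but not yet at objective
assert kpps[1].is_compliant(30)           # 30 sec beats the 45 sec threshold
```

Encoding the optimization direction explicitly matters: a time-to-first-fix of 30 seconds is better than threshold even though the number is smaller.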
Flow-Down and the Risk of Value Collapse
When a prime contractor receives an RFP with threshold/objective KPPs, the values must flow down into lower-level requirements — eventually reaching subsystem and component specifications. This is where the distinction most often gets lost.
The failure mode is predictable. A systems engineer derives a link budget, allocates a threshold of 28 dBm and an objective of 35 dBm to the transmitter subsystem, and hands a specification to the RF subcontractor. The subcontractor’s template has a single “shall” statement. The systems engineer writes “shall achieve a minimum transmit power of 28 dBm.” The objective disappears. The subcontractor designs to 29 dBm, clears the threshold, and considers the requirement met. The prime contractor is now stuck at the threshold end of the trade space for a parameter that was supposed to be a discriminator.
Proper flow-down preserves the pair. The subcontractor specification should read: “The transmitter shall achieve a minimum output power of 28 dBm (threshold) and is desired to achieve 35 dBm (objective).” That sentence changes how the subcontractor allocates design margin. It also changes how the prime evaluates competing subsystem proposals.
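The collapse and its fix can be made concrete. In the sketch below (illustrative names; real flow-down involves budget allocation and analysis, not a field copy), a specification generated from a structured allocation keeps both values, while a record that drops the objective emits exactly the single-number "shall" statement described above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Allocation:
    parent: str
    subsystem: str
    units: str
    threshold: float
    objective: Optional[float]  # None models the value-collapse failure mode

def flow_down(parent: str, subsystem: str, units: str,
              threshold: float, objective: float) -> Allocation:
    """Allocate a parent KPP to a subsystem, carrying BOTH values.

    Illustrative only: deriving the allocated numbers is engineering
    work; the point is that the pair survives the hand-off.
    """
    return Allocation(parent, subsystem, units, threshold, objective)

def spec_statement(a: Allocation) -> str:
    if a.objective is None:
        # The collapsed form: the objective has silently disappeared.
        return (f"The {a.subsystem} shall achieve a minimum of "
                f"{a.threshold} {a.units}.")
    return (f"The {a.subsystem} shall achieve a minimum of "
            f"{a.threshold} {a.units} (threshold) and is desired to "
            f"achieve {a.objective} {a.units} (objective).")

tx = flow_down("link budget EIRP", "transmitter", "dBm", 28, 35)
print(spec_statement(tx))
```

The generated statement matches the flow-down language in the paragraph above; the subcontractor sees the whole trade space, not just its floor.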
Design Reviews as Threshold/Objective Checkpoints
Program design reviews — SRR, PDR, CDR — each have a different relationship to the threshold/objective structure.
At SRR, the threshold and objective values should be established and baselined. If the customer has not specified objectives, the contractor’s systems engineering team should propose them. Leaving objective values undefined at SRR means the program will spend the next eighteen months with no formal optimization target.
At PDR, the system architecture should demonstrate a credible path to meeting thresholds, with analysis (not just assertion) showing which objective values are achievable within the design point. The PDR exit criterion is not “we will meet threshold” — it is “we understand the trade space between threshold and objective and have made explicit design choices about where in that space we expect to land.”
At CDR, the design should be mature enough to predict final performance within a defensible uncertainty band. If the design predicts performance between threshold and objective, that is a successful CDR outcome — it is not a deficiency requiring corrective action. If the design predicts performance below threshold, that is a CDR action item. Confusing these two situations — treating a threshold/objective gap as a CDR finding — is a common and expensive mistake.
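The distinction between the two CDR situations reduces to a simple classification. A sketch for a single performance prediction (hypothetical helper; real review criteria would also weigh the uncertainty band, which this omits):

```python
def cdr_finding(predicted: float, threshold: float, objective: float,
                higher_is_better: bool = True) -> str:
    """Classify a CDR performance prediction (illustrative sketch).

    Landing between threshold and objective is a success, not a
    deficiency; only a prediction below threshold is an action item.
    """
    sign = 1 if higher_is_better else -1
    if sign * (predicted - threshold) < 0:
        return "ACTION ITEM: predicted performance below threshold"
    if sign * (predicted - objective) >= 0:
        return "meets objective"
    return "acceptable: between threshold and objective"

# 290 km predicted against a 250/350 km range KPP: a successful CDR outcome.
print(cdr_finding(290, threshold=250, objective=350))
```

The point of encoding this is that the same rule applies uniformly at the review: nobody argues about whether a between-the-values prediction is a finding.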
Practical Implications for Verification Planning
Threshold and objective values require different verification strategies, and failing to plan for both creates problems late in the program.
Threshold verification is typically contractually mandatory. The test or analysis that demonstrates threshold performance must be in the TEMP (Test and Evaluation Master Plan) or equivalent verification document. If you cannot demonstrate threshold, the system does not meet the requirement.
Objective verification is often desirable but not always contractually mandated at the same rigor level. The program should still plan to measure objective-relevant performance where practical — because that data feeds design feedback, award fee evaluation, and future system upgrades. But the verification approach may be different. A threshold demonstration might require a formal qualification test with full instrumentation. An objective demonstration might be captured through operational test data or analysis against test results.
The risk of writing a single-number requirement and then discovering late that the customer expected objective performance is particularly damaging in verification. If the contract only specifies threshold values but the customer grades award fee against objective proximity, the program needs to know that before the test campaign, not after. This requires the threshold/objective distinction to survive all the way through the verification planning process, not just the requirements database.
When test resources are constrained — range availability, unit count, thermal-vac chamber time — programs routinely face the choice of testing to threshold at the required confidence level versus testing to objective at lower statistical confidence. Without explicit threshold and objective values in the verification plan, that trade cannot be made rationally.
Risks of Conflating Threshold and Objective
The damage from collapsing threshold and objective into a single requirement value flows in both directions.
Setting the requirement at the objective creates the waiver problem described at the outset. The design team has no formal range within which to operate. Every performance shortfall against a single number triggers a requirements change, a waiver request, or both. Programs that consistently experience high rates of requirements change during detailed design often have this root cause: requirements were written at objective performance levels, which are by definition aspirational, without a threshold that the design can reliably close.
Setting the requirement at the threshold destroys competitive incentive. If the contractor’s only obligation is to meet minimum acceptable performance, there is no contractual mechanism to reward better performance or to distinguish between proposals that promise threshold performance and proposals that credibly offer objective performance. Source selection loses discrimination. Operational capability lands at the floor.
Writing threshold and objective in prose footnotes rather than formal requirement fields is the most common middle path — and it is almost as bad as either extreme. Prose clarifications get lost in document revisions. They are not parsed by requirements management tools. They do not appear in traceability matrices. They are invisible to verification engineers who only read the “shall” statement. The requirement needs to carry both values as first-class data, not as explanatory text in parentheses.
How Modern Tools Implement This
Most legacy requirements management tools — IBM DOORS, Jama Connect, Polarion — can store threshold and objective values, but they do so through custom attributes added to text-based requirement records. The values exist in the database, but they are not structurally connected to the requirement in a way that makes them visible in traceability, automatically surfaced during design reviews, or queryable in trade analysis workflows. A systems engineer who wants to know “which of our KPPs are currently predicted to land between threshold and objective” typically has to build that view manually from multiple sources.
Flow Engineering takes a different approach, treating requirements as structured, parameterized objects rather than documents with metadata. In Flow Engineering, a requirement with threshold and objective values is represented as a node in the system model with both values explicitly typed — not as free text in an attribute field, but as formal parameters with units, tolerances, and traceability. When a design review occurs, the model can show, for each parameterized requirement, where the current design prediction sits relative to both threshold and objective simultaneously.
This matters operationally. During a PDR preparation cycle, a systems engineer can query Flow Engineering to identify which requirements have design margins that are currently compressing toward threshold — meaning decisions made in the next design iteration could push the program out of compliance — versus which requirements have design predictions comfortably above threshold and trending toward objective. That is a different conversation from “are we compliant with our requirements,” and it is the conversation that actually drives useful PDR outcomes.
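A generic sketch of that kind of query (plain Python over hypothetical tracked records, not Flow Engineering's actual API) flags parameters whose normalized margin is both low and shrinking between design iterations:

```python
from dataclasses import dataclass

@dataclass
class TrackedKpp:
    name: str
    threshold: float
    objective: float
    prev_prediction: float  # prediction at the previous design iteration
    curr_prediction: float  # prediction at the current design iteration

def margin_fraction(k: TrackedKpp, value: float) -> float:
    """Normalized position in the trade space: 0.0 at threshold, 1.0 at
    objective. Works for lower-is-better parameters too, since numerator
    and denominator flip sign together."""
    return (value - k.threshold) / (k.objective - k.threshold)

def compressing_toward_threshold(kpps: list, floor: float = 0.25) -> list:
    """Flag KPPs whose predicted margin is low AND shrinking."""
    return [k.name for k in kpps
            if margin_fraction(k, k.curr_prediction) < floor
            and margin_fraction(k, k.curr_prediction)
                < margin_fraction(k, k.prev_prediction)]

watch = [
    TrackedKpp("Maximum Range", 250, 350,
               prev_prediction=310, curr_prediction=265),
    TrackedKpp("MTBF", 500, 1200,
               prev_prediction=700, curr_prediction=760),
]
print(compressing_toward_threshold(watch))  # ['Maximum Range']
```

Range margin has fallen from 0.60 to 0.15 of the trade space and gets flagged for the PDR discussion; MTBF is compliant and trending toward objective, a different conversation entirely.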
Flow Engineering’s graph-based model also preserves threshold/objective pairs through requirement flow-down automatically. When a top-level KPP is allocated to subsystem specifications, the parameterized structure — including both values and their types — flows with the allocation rather than requiring a systems engineer to manually re-enter objective values into a subcontractor spec template. This is where the value collapse problem described earlier is most often prevented.
The platform does not replace engineering judgment about where to set threshold and objective values — that remains a systems engineering discipline requiring domain knowledge, risk analysis, and customer negotiation. What it removes is the structural failure mode where correct values, negotiated carefully at the RFP stage, quietly disappear during the document-to-database translation.
Practical Starting Points
If your program is currently using single-value requirements and experiencing frequent waivers or requirements changes during preliminary design, the threshold/objective structure is worth adopting in the next requirements baseline cycle. Three steps get you started:
First, identify your KPPs and TPMs. Key Performance Parameters and Technical Performance Measures are the natural home for threshold/objective pairs. Start there rather than trying to retrofit the structure onto every derived requirement in the specification.
Second, negotiate objective values explicitly with the customer. If your RFP only specifies threshold values, ask the customer to formally state objective values before contract award. “We want the best range you can give us” is not an objective value. “We want 350 km and will evaluate proposals that approach it more favorably” is.
Third, carry both values into verification planning. Every KPP with a threshold/objective pair should have verification events planned against both values — even if the objective verification is lower-rigor. This creates the feedback loop that makes the structure useful through the full program lifecycle, not just in source selection.
The threshold/objective distinction is not a bureaucratic formality inherited from MIL-STD documents. It is a formal mechanism for preserving design flexibility, enabling honest trade analysis, and avoiding the requirements churn that consumes program schedule during the phase when engineering decisions actually matter.