Every public sector tender you bid on comes with an evaluation methodology. Understanding how evaluators score your response — before you start writing — is one of the most underleveraged advantages in public procurement. Most agencies write bid responses without fully understanding the mechanics of how they will be assessed. The ones who do score consistently higher.
This guide explains how public sector evaluation criteria work, what MEAT means in practice, how quality and price are typically weighted in design and digital contracts, and what you can do to score more points on each criterion.
## What Is MEAT?
MEAT stands for Most Economically Advantageous Tender. It is the standard evaluation method used in UK public procurement under the Public Contracts Regulations 2015 and the Procurement Act 2023.
Despite the word "economically," MEAT does not mean cheapest. It means the bid that offers the best overall value — taking into account both quality and price, weighted according to the buyer's stated priorities.
Under the Procurement Act 2023, which came into force in February 2025, the preferred term in government guidance has shifted to MAT (Most Advantageous Tender), which broadens the evaluation criteria to include non-economic factors like social value. In practice, both terms appear in live procurement documents, and the underlying mechanics are the same.
The key point: price is never the only criterion. In most design and digital contracts, quality accounts for 60–70% of the total score. A competitive price matters, but it rarely wins a contract on its own.
## Quality/Price Splits in Design and Digital Contracts
The typical quality/price split for design, UX, and digital delivery contracts in UK public sector procurement is:
| Contract Type | Quality Weight | Price Weight |
|---|---|---|
| Strategy and discovery work | 70–80% | 20–30% |
| Design and UX delivery | 60–70% | 30–40% |
| Full digital build (design + dev) | 60% | 40% |
| Framework call-offs (e.g. DOS7) | Often 70% | 30% |
| Larger capital programmes | 50–60% | 40–50% |
The implication: if a contract is 70% quality / 30% price, a supplier who scores 90% on quality and prices 20% above the cheapest competitor will almost always beat the cheaper supplier who scores 70% on quality.
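The arithmetic behind this can be sketched under a normalised price model, where the cheapest bid takes full price marks and others score pro rata. The figures below are illustrative, matching the example above:

```python
def total_score(quality_pct, price, lowest_price,
                quality_weight=70, price_weight=30):
    """Weighted total on a 70/30 split, using normalised price scoring:
    the lowest price takes full price marks, others score pro rata."""
    quality_points = quality_pct / 100 * quality_weight
    price_points = lowest_price / price * price_weight
    return quality_points + price_points

lowest = 100_000                             # cheapest compliant bid
strong = total_score(90, 120_000, lowest)    # 90% quality, priced 20% above cheapest
cheap = total_score(70, 100_000, lowest)     # 70% quality, cheapest bid

print(round(strong, 1), round(cheap, 1))     # 88.0 vs 79.0 — quality wins
```

Even at a 20% price premium, the stronger quality score carries the bid by nine points.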
Many agencies underprice in a misguided attempt to win on cost. This is not only bad for margins — it is often not necessary. Understand the weighting before you decide your pricing strategy. For a detailed breakdown of how price scoring works, see our guide to pricing public sector bids.
## How Quality Criteria Are Structured
Quality sections in an ITT (Invitation to Tender) or SQ (Selection Questionnaire) typically consist of between three and eight separately weighted questions. Common criteria in design and digital contracts include:
- Methodology / approach — how you plan to deliver the work
- Case studies / relevant experience — evidence of comparable previous work
- Team and resource plan — who will work on the contract and at what capacity
- Understanding of requirements — how well you have grasped the brief
- Social value — your commitments to employment, community benefit, and net zero
- Project management and risk — how you will manage delivery, dependencies, and risks
- Innovation — evidence of creative or technical approaches that add value
Each criterion has an explicit weighting (e.g. 20% of total score, or 15 out of 100 points). These weightings are always published in the procurement documents. Read them carefully — they tell you exactly where to invest your writing time. For a full guide to structuring your written responses, see how to write a winning public sector bid.
## How Scoring Works
Most evaluators use a 0–5 or 0–10 scoring scale per criterion, applied by multiple independent assessors, then moderated into a consensus score. Some buyers use percentage-based scoring (0–100%) with the highest scorer normalised to the maximum available marks.
The typical scoring descriptors on a 0–5 scale are:
| Score | Descriptor |
|---|---|
| 0 | Non-compliant / no response |
| 1 | Poor — significant gaps, vague, unconvincing |
| 2 | Adequate — meets minimum requirements but limited evidence |
| 3 | Good — clear, credible, some evidence of capability |
| 4 | Very good — strong evidence, well-structured, specific |
| 5 | Excellent — outstanding, differentiating, compelling evidence |
The difference between a 3 and a 5 on a criterion worth 20% of the total score is significant. On a 100-point contract scored at 70/30 quality/price, a 3 vs 5 on a 20-point criterion is worth 8 quality points — roughly equivalent to being 25% cheaper than your nearest competitor.
The practical takeaway: generic, well-written responses score 3s. Specific, evidenced, tailored responses score 4s and 5s.
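The score-gap arithmetic above can be made concrete in a few lines (weights are the illustrative ones from the example: a 20-point criterion on a 0–5 scale, 30 price marks):

```python
CRITERION_WEIGHT = 20   # points this criterion carries out of 100
SCALE_MAX = 5           # 0-5 scoring scale
PRICE_MARKS = 30        # price marks on a 70/30 split

def criterion_points(score):
    """Convert a 0-5 evaluator score into weighted quality points."""
    return score / SCALE_MAX * CRITERION_WEIGHT

quality_gap = criterion_points(5) - criterion_points(3)

# Price gap if a competitor undercuts you by 25% under normalised scoring:
# your score becomes (0.75 * your_price / your_price) * 30 = 22.5
price_gap = PRICE_MARKS - 0.75 * PRICE_MARKS

print(quality_gap, price_gap)   # 8.0 vs 7.5 — a two-point quality swing
                                # outweighs a 25% price undercut
```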
## What Evaluators Are Actually Looking For
Evaluators in UK public sector procurement are not experts in design. They are typically procurement officers, senior stakeholders, and occasionally subject matter experts. Their job is to apply the published scoring rubric consistently across all bids received.
This shapes what works:
Specificity beats generality. "We take a user-centred approach" scores lower than "We conducted 24 user research sessions with NHS community nurses to identify the drop-off points in the digital referral pathway, resulting in a 37% reduction in incomplete submissions." The second response gives the evaluator something they can point to and award marks against.
Evidence beats assertion. Any claim you make needs supporting evidence — a case study reference, a metric, a named client (if allowed), a methodology tool, a team credential. Evaluators are instructed to discount assertions they cannot verify.
Structure helps evaluators score. Long prose responses are harder to evaluate than responses that mirror the question structure, use subheadings or short paragraphs, and directly address each element of the question. If the question has three parts, your response should have three clearly labelled sections.
Social value is scored, not noted. Many agencies treat social value as a box-ticking exercise. Buyers are increasingly asking for specific, measurable commitments — apprenticeship targets, supply chain SME spend percentages, carbon reduction plans, accessibility delivery commitments. A vague commitment to "supporting local communities" typically scores 2 or 3. A specific commitment with a measurable outcome and a reporting mechanism scores 4 or 5.
## The Price Evaluation Mechanics
The most common pricing evaluation method in UK public procurement is normalised scoring — the lowest price receives the maximum available marks, and all other prices are scored relative to the lowest:
Price score = (Lowest price ÷ Your price) × Maximum price marks
So if the maximum price score is 30 points and the lowest bid is £80,000:
- A bid of £80,000 scores 30/30
- A bid of £90,000 scores 26.7/30
- A bid of £100,000 scores 24/30
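The formula and worked figures above can be expressed as a small function (function and variable names are illustrative):

```python
def price_score(bid_price, lowest_price, max_marks=30):
    """Normalised price scoring: lowest price takes full marks,
    all other bids score pro rata against it."""
    return lowest_price / bid_price * max_marks

lowest = 80_000
for bid in (80_000, 90_000, 100_000):
    print(f"£{bid:,}: {price_score(bid, lowest):.1f}/30")
# £80,000: 30.0/30
# £90,000: 26.7/30
# £100,000: 24.0/30
```

Note the diminishing penalty: each additional £10,000 costs fewer marks than the last, which is why modest price premiums are often recoverable on quality.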
The price gap between second and third cheapest is often small in absolute score terms. If you are strong on quality, pricing within 10–15% of the market rate is usually competitive. Pricing significantly below market rate — without a clear reason — raises evaluator concerns about deliverability.
Some buyers use a price envelope approach instead: bids within a specified price band all score maximum marks, and price differentiation only applies to bids outside the band. This approach is more common in higher-value strategic contracts.
Always read the evaluation methodology section of the procurement document carefully. The scoring method for price is always specified.
## Pass/Fail Gates Before Scored Evaluation
Most procurement processes include mandatory pass/fail requirements that must be met before scored evaluation begins. These are not scored — they are compliance gates. Common pass/fail criteria include:
- Company turnover minimum (typically 2× annual contract value)
- Professional indemnity insurance threshold
- Cyber Essentials certification
- ISO 27001 or equivalent information security standard
- Financial health checks (credit rating, insolvency history)
If your bid fails a pass/fail criterion, it is typically excluded from scored evaluation regardless of bid quality. For smaller agencies, the turnover multiple requirement is the most common disqualification point — especially on higher-value contracts. Check these gates before investing time in a bid.
## Tender Evaluation Timelines
Understanding the buyer's evaluation timeline helps you plan follow-up and manage expectations. Typical timelines after bid submission:
- Clarification round: 1–2 weeks after submission. Buyers may raise written clarification questions — respond clearly and on time.
- Moderation: 2–4 weeks. Individual evaluators score independently, then moderate to consensus.
- Standstill period: 8 working days after notification of the award decision under the Procurement Act 2023 (10 calendar days under the Public Contracts Regulations 2015). Unsuccessful bidders can request a debrief.
- Contract signature: After standstill, unless challenged.
Total evaluation time from submission to contract signature typically runs 4–8 weeks for mid-size contracts. On framework call-offs via DOS7 or similar, timelines can be compressed to 2–3 weeks.
## Practical Implications for Smaller Agencies
Many design and digital agencies hesitate to bid on public sector contracts because they assume the evaluation process favours larger suppliers. In practice, the evaluation criteria in most design and digital contracts are designed to assess capability and quality of approach — not organisational size.
The Procurement Act 2023 includes specific provisions to make it easier for SMEs to compete: proportionate financial requirements, disaggregated contracts, and a requirement for buyers to consider SME participation. Framework agreements like DOS7 are explicitly accessible to SMEs.
The competitive advantages smaller agencies genuinely have in evaluation:
- Responsiveness and named team continuity — evaluators value knowing exactly who will be working on the contract
- Specialist sector depth — agencies focused on specific public sector verticals (NHS, councils, education) can evidence deep domain knowledge that generalist suppliers cannot match
- Agile delivery track record — smaller agencies often have stronger evidence of iterative, GDS-aligned delivery
- Social value authenticity — credible local employment and supply chain commitments are easier for smaller, regionally based agencies to make and evidence
The agencies that lose despite strong quality scores typically lose on one of three things: pricing too far above market, failing pass/fail gates they didn't check in advance, or submitting generic responses to scored questions.
## Using Tandara to Focus Your Bid Investment
Every bid you write takes time. Understanding evaluation criteria only helps if you are bidding on the right contracts in the first place.
Tandara monitors UK public procurement portals daily and filters opportunities specifically for design and digital agencies — surfacing tenders that match your capability profile before they close. The service gives you earlier sight of relevant opportunities, so you can triage on your own terms rather than scrambling when a tender appears with a 2-week window.