Guidance

How to score Digital Outcomes and Specialists suppliers

How to score when you evaluate digital outcomes, digital specialists and user research participants suppliers

You must score shortlisted Digital Outcomes and Specialists suppliers against the criteria you published with your requirements. You can use the scoring template to score the evidence that suppliers provide.

Scoring evaluation criteria

You must score digital outcomes and digital specialists suppliers on:

  • technical competence, for example how well the supplier’s skills or proposal meet your needs
  • cultural fit, for example how the supplier will work in your organisation
  • price of the proposal

You must score user research participants suppliers on:

  • technical competence, for example how well the supplier’s skills or proposal meet your needs
  • availability, for example whether they can recruit participants when you need them
  • price of the proposal

You must score user research studio suppliers on:

  • technical competence, for example how well the studio meets your needs
  • price

Find out how to score user research studio suppliers in the how to hire user research studios guide.

Make sure you buy fairly. Don’t allow a supplier’s price to influence the score you give them for technical competence and cultural fit.

The scoring process

  1. Score suppliers on each criterion for technical competence and cultural fit.
  2. If you published points with your evaluation criteria, give points to each one.
  3. Calculate an overall score for technical competence and cultural fit.
  4. Calculate an overall score for availability (user research participants only).
  5. Weight overall scores for technical competence, cultural fit and availability.
  6. Score the price of suppliers’ proposals or day rates.
  7. Weight the score for price.
  8. Calculate a total score for each supplier.
  9. Find the winning supplier.

1. Score suppliers on each criterion for technical competence and cultural fit

You must score shortlisted suppliers:

  • using the criteria you published with your requirements
  • after each assessment

Score each criterion individually. Write notes on the evidence the supplier has given in all assessment methods for each criterion, and give each criterion a score using the scoring scheme. You can’t use half scores. Exclude suppliers who score less than 2 for any essential skills or experience criteria.

Score  Description
0      Not met or no evidence
1      Partially met
2      Met
3      Exceeded
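
If you keep scores in a spreadsheet or script, the exclusion rule is easy to check automatically. A minimal Python sketch, assuming scores are held as a simple list (the data shape and function name are illustrative, not part of the framework):

  # Illustrative sketch: the scoring scheme and the exclusion rule.
  VALID_SCORES = {0, 1, 2, 3}  # whole numbers only - no half scores

  def is_excluded(essential_scores):
      """Return True if any essential skills or experience criterion scored below 2."""
      assert all(s in VALID_SCORES for s in essential_scores)
      return any(s < 2 for s in essential_scores)

  print(is_excluded([2, 1]))  # True - a score of 1 on an essential criterion means exclusion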

Your evaluation team must score each supplier individually and mustn’t share scores until all suppliers have been scored using every assessment method.

When your evaluation team has finished scoring all suppliers, you need to agree a score for each criterion. You can’t take an average of the evaluation team’s scores. You must:

  • discuss why each evaluator gave each supplier the score they did for each criterion
  • reach an agreement on each supplier’s score for each criterion

Read how to buy fairly.

2. If you published points with your evaluation criteria, give points to each one

You can weight individual criteria by adding ‘points’ to each criterion you want to evaluate. You can only do this if you said you would when you published your requirements and evaluation criteria.

To weight individual criteria, multiply the score you gave each criterion by the points you gave to that criterion.

Example:

If you give ‘experience designing services for users with low digital literacy’ 15 points and supplier A scored 2 for it, multiply 15 by 2. Supplier A gets a weighted score of 30 points.
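
The same calculation as a short Python sketch, using the figures from this example:

  points = 15   # points published for this criterion
  score = 2     # supplier A's score for it
  print(points * score)  # 30 - supplier A's weighted score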

3. Calculate an overall score for technical competence and cultural fit

When you’ve given each criterion a score, you need to calculate the overall score for technical competence and the overall score for cultural fit.

Add up all the scores for technical competence (essential skills and experience, nice-to-have skills and experience, and proposal criteria).

If you weighted individual criteria, add up the weighted scores for each question.

Repeat this process to get an overall score for cultural fit.

Example for technical competence:

When you published your digital outcome requirements, you said you’d evaluate technical competence on 5 criteria.

Supplier B’s scores for each individual criterion:

Essential skills and experience:
  • must have Python development expertise: 2
  • must have experience designing services for users with low digital literacy: 2

Nice-to-have skills and experience:
  • it would be good if the supplier has evidence of delivering at scale: 1

Proposal criteria:
  • how well the proposed technical solution meets requirements: 3
  • how the approach or solution meets the organisation or policy goal: 2

Overall score: 10
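
A short sketch of the same sum in Python, with supplier B’s scores grouped by criterion type (the grouping is illustrative):

  scores = {
      "essential": [2, 2],      # Python expertise; low digital literacy experience
      "nice_to_have": [1],      # evidence of delivering at scale
      "proposal": [3, 2],       # technical solution; organisation or policy goal
  }
  print(sum(sum(group) for group in scores.values()))  # overall score = 10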

4. Calculate an overall score for availability (user research participants only)

If you’re evaluating user research participant suppliers, you must score them on availability, for example whether the supplier can recruit user research participants when you need them.

Availability should be scored from 0 to 2. You must exclude suppliers who score 0 for availability.

Score  Description
0      Not met or no evidence
1      Partially met, for example if they can provide some of the participants when you need them, or if 3 out of 5 participants can come to the user research studio and the other 2 can only do phone interviews
2      Met, for example they are available when you need them

5. Weight overall scores for technical competence, cultural fit and availability

When you have a score for technical competence and a score for cultural fit or availability, you need to apply the weighting you published with your requirements to calculate a supplier’s overall score for each criterion.

If you didn’t weight individual criteria

To weight overall scores for technical competence or cultural fit:

  1. Calculate the maximum possible score for technical competence or cultural fit by multiplying the number of questions you asked by 3 (the maximum score for each question).
  2. Take the supplier’s overall score for technical competence or cultural fit and divide it by the maximum score possible.
  3. Multiply this by the weighting you gave to that criterion when you published your requirements.

Example for technical competence:

You said technical competence is worth 60% when you published your requirements and evaluation criteria.

You asked 5 questions on technical competence. To get the maximum score possible for technical competence, multiply 5 by 3 to get 15.

If supplier B got an overall score of 10 on technical competence, divide 10 by 15. Multiply the answer (two thirds) by 60 (your weighting for technical competence). Supplier B’s weighted score for technical competence is 40 (out of a possible 60).
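
The same calculation in Python, using the figures above:

  num_questions = 5
  max_score = num_questions * 3   # 15 - each question scores up to 3
  overall_score = 10              # supplier B's overall score
  weighting = 60                  # technical competence is worth 60%
  print(overall_score / max_score * weighting)  # 40.0 out of a possible 60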

If you weighted individual criteria

To weight overall scores for technical competence or cultural fit:

  1. Add up the points you gave to each technical competence question, then multiply the total by 3 (the maximum score for each question).
  2. Take the supplier’s overall score for technical competence and divide it by the maximum score possible.
  3. Multiply this by the weighting you gave to that criterion when you published your requirements.

Example if you weighted individual criteria:

You said technical competence is worth 60% when you published your requirements and evaluation criteria.

You asked 5 questions on technical competence:

  • must have Python development expertise - 15 points
  • must have experience designing services for users with low digital literacy - 10 points
  • it would be good if the supplier has evidence of delivering at scale - 5 points
  • how well the proposed technical solution meets requirements - 15 points
  • how the approach or solution meets the organisation or policy goal - 20 points

To get the maximum score possible for technical competence, add up the points for all technical competence questions (15+10+5+15+20) and multiply by 3 to get 195.

If supplier C got an overall score of 140 on technical competence, divide 140 by 195 and multiply by 60 (your weighting for technical competence). Supplier C’s weighted score for technical competence is 43.077 out of 60.
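
And the weighted-criteria version of the same calculation:

  points = [15, 10, 5, 15, 20]   # points published for the 5 questions
  max_score = sum(points) * 3    # 195 - each question scores up to 3
  overall_score = 140            # supplier C's overall score
  weighting = 60                 # technical competence is worth 60%
  print(round(overall_score / max_score * weighting, 3))  # 43.077 out of 60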

If you are recruiting user research participants

To weight overall scores for availability:

  1. Divide the supplier’s score for availability (0-2) by the maximum score for availability (2).
  2. Multiply this by the weighting you gave to availability when you published your requirements.

Example:

You said availability is worth 40% when you published your requirements and evaluation criteria.

Supplier D scored 1 out of 2 for availability. Divide 1 by 2 and multiply it by 40. Supplier D’s weighted score for availability is 20 out of 40.
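
The availability weighting follows the same pattern. In Python:

  availability_score = 1   # supplier D: partially met
  weighting = 40           # availability is worth 40%
  print(availability_score / 2 * weighting)  # 20.0 out of a possible 40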

6. Score the price of suppliers’ proposals or day rates

You’ll need to score price based on how close each supplier’s quote is to the cheapest supplier’s quote. The way you score suppliers on price depends on whether you’re buying digital outcomes, specialists or user research participants.

If you think the supplier has offered an unusually low quote, you must ask them to explain their quote. If the supplier’s explanation isn’t good enough, you may need to exclude them.

For more information on what to do if you receive an unusually low quote (and when you can reject it) see Regulation 69, sections 4 to 7, of the Public Contracts Regulations 2015.

Scoring price for digital outcomes

The way you score suppliers on price depends on how you said you wanted them to provide a price for their proposal. You must use the same method to score all suppliers.

Fixed quotes

To score fixed price quotes, you must divide the cheapest quote by each supplier’s quote.

Example:

  • supplier A’s quote is £15,000
  • supplier B’s quote is £10,000
  • supplier C’s quote is £30,000

To calculate a score for supplier A, divide 10,000 by 15,000. Supplier A scores 0.667.

To calculate a score for supplier B, divide 10,000 by 10,000. Supplier B scores 1.

To calculate a score for supplier C, divide 10,000 by 30,000. Supplier C scores 0.333.
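
A sketch of the fixed price calculation in Python, using the quotes above. The same cheapest-divided-by-quote method applies to the day rates and per-participant prices later in this step:

  quotes = {"A": 15_000, "B": 10_000, "C": 30_000}
  cheapest = min(quotes.values())
  print({s: round(cheapest / q, 3) for s, q in quotes.items()})
  # {'A': 0.667, 'B': 1.0, 'C': 0.333}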

Flexible quotes

To score time and materials and capped time and materials quotes, you must:

  • calculate a total cost for each supplier by multiplying the day rates of the individuals who’ll be doing the work by the number of days the supplier said each role would be needed to complete the work
  • find the cheapest quote
  • divide the cheapest quote by each supplier’s quote

Example:

Supplier A estimated it would take 20 days to do the work. Their team would be needed for all 20 days and would be:

  • 5 developers at £500 per day
  • 1 technical architect at £700 per day
  • 1 product manager at £600 per day

Multiply 500 by 5, then add 700 and 600 to get the total cost for 1 day (£3,800). Then multiply by 20 days. The estimated total cost for supplier A is £76,000.

Use this method to calculate the estimated total price for all suppliers:

  • supplier A costs £76,000
  • supplier B costs £75,000
  • supplier C costs £100,000

To calculate a score for supplier A, divide 75,000 by 76,000. Supplier A scores 0.987.

To calculate a score for supplier B, divide 75,000 by 75,000. Supplier B scores 1.

To calculate a score for supplier C, divide 75,000 by 100,000. Supplier C scores 0.75.
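
A sketch of the time and materials calculation, assuming (as in this example) that every role is needed for the full estimate:

  team_a = [(5, 500), (1, 700), (1, 600)]   # (headcount, day rate) for each role
  days = 20
  total_a = sum(n * rate for n, rate in team_a) * days
  print(total_a)   # 76000

  totals = {"A": total_a, "B": 75_000, "C": 100_000}
  cheapest = min(totals.values())
  print({s: round(cheapest / t, 3) for s, t in totals.items()})
  # {'A': 0.987, 'B': 1.0, 'C': 0.75}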

Scoring price for digital specialists

Score the price for digital specialist roles using the day rate they submitted when they applied for the work. Suppliers can’t change their day rates.

You must divide the cheapest day rate by each supplier’s day rate.

Example:

  • supplier A’s day rate for a developer is £500
  • supplier B’s day rate for a developer is £600
  • supplier C’s day rate for a developer is £350

To calculate a score for supplier A, divide 350 by 500. Supplier A scores 0.7.

To calculate a score for supplier B, divide 350 by 600. Supplier B scores 0.583.

To calculate a score for supplier C, divide 350 by 350. Supplier C scores 1.

Scoring price for user research participants

Suppliers will provide a total price per participant recruited in their proposal. The participant price must include all incentives, recruitment costs, and any travel or other expenses paid to participants.

You must divide the cheapest quote by each supplier’s quote.

Example:

  • supplier A’s quote is £120 per participant
  • supplier B’s quote is £100 per participant
  • supplier C’s quote is £90 per participant

To calculate a score for supplier A, divide 90 by 120. Supplier A scores 0.75.

To calculate a score for supplier B, divide 90 by 100. Supplier B scores 0.9.

To calculate a score for supplier C, divide 90 by 90. Supplier C scores 1.

7. Weight price

When you have scored the price for each supplier you need to apply the weighting you published with your requirements.

Multiply the supplier’s score by the weighting for price.

Example:

You said price is worth 20% when you published your requirements and evaluation criteria.

Supplier E scored 0.7 for price. Multiply 0.7 by 20. Supplier E’s weighted score for price is 14 out of 20.
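
As with the other weightings, this is a single multiplication:

  price_score = 0.7
  weighting = 20                   # price is worth 20%
  print(price_score * weighting)   # 14.0 out of a possible 20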

8. Calculate a total score for each supplier

To calculate each supplier’s total weighted score, add up their weighted scores for technical competence, cultural fit or availability, and price.

Example:

Supplier C’s weighted scores were:

  • 43.077 out of 60 for technical competence
  • 16 out of 20 for cultural fit
  • 14 out of 20 for price

Their total weighted score is 73.077 out of 100.
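
A final check of the addition in Python:

  weighted_scores = {"technical competence": 43.077,
                     "cultural fit": 16,
                     "price": 14}
  print(round(sum(weighted_scores.values()), 3))  # 73.077 out of 100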

9. Find the winning supplier

Add up the scores from the evaluation and shortlist stages. The winning supplier is the one with the highest total score.

What to do if there’s a tie

If 2 or more suppliers have the same score, you can either:

  • use the score from the criterion with the highest weighting, then the next highest, until the tie is broken. For example, if you weighted price as the most important criterion, the winning supplier is the one with the highest score for price
  • ask the tied suppliers to provide ‘best and final’ quotes. The winning supplier is the one with the lowest quote
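
The first option amounts to an ordered comparison of scores, criterion by criterion. A Python sketch with hypothetical tied suppliers and weightings (the names and figures are made up for illustration):

  weightings = {"price": 50, "technical competence": 30, "cultural fit": 20}
  tied = {
      "A": {"price": 45, "technical competence": 25, "cultural fit": 10},
      "B": {"price": 40, "technical competence": 28, "cultural fit": 12},
  }
  # Compare criteria from the highest published weighting downwards
  order = sorted(weightings, key=weightings.get, reverse=True)
  print(max(tied, key=lambda s: [tied[s][c] for c in order]))  # 'A' wins on price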

What to keep a record of

You must keep a record of how you’ve made your decisions, including the evaluation team’s individual and agreed scores, and any communication you have with suppliers.

Updates to this page

Published 25 April 2016
Last updated 1 October 2019
  1. Clarifying that the winning supplier has the highest combined score from both the evaluation and shortlisting stages. Changed term 'user research lab' to 'user research studio' for consistency with other guidance.

  2. First published.
