Job evaluation in the UK: Provisional findings from the E-reward survey

Like it or not, organisations attach different worth to the people who work in them and the jobs that they fill. This 'worth' may be explicit or hidden. It may be based on a comparison of job requirements, the external market, an individual's performance or a combination of these. Evaluating this 'worth' leads directly or indirectly to how much an organisation pays someone. It may determine exactly where an individual fits within the organisation’s pay system, or it may provide a broad framework for decision-making.

Even amongst organisations that do not claim to have a formal basis for establishing individual or job worth, decisions about where and how to place individuals within an organisation are taking place all the time. Typically, this includes a view about the relative worth of the job as well as the job holder. This is where job evaluation comes in: it is about making decisions on the relative worth of a job to the organisation. There are many different ways of doing it, from the very simple to the extremely sophisticated. For this reason there is always a demand for insights into what job evaluation is and how it works, which is why E-reward has conducted this ambitious survey.

Here are some provisional findings. The full report will be emailed to survey participants in the next couple of weeks. It will also be available for download as part of a subscription to our Reward Blueprints.

The E-reward survey

  • This job evaluation survey was conducted by E-reward between June and July 2017.
  • A total of 98 UK-based organisations responded to our detailed online questionnaire.

Extent of job evaluation among survey respondents

  • More than three-quarters (77.6%) of respondents have a formal job evaluation scheme in place.
  • More than half (54.5%) of respondents with no formal job evaluation scheme plan to introduce one at some point.
  • Just over a quarter (27.6%) of respondents plan to make changes to their job evaluation scheme in the near future.

Types of job evaluation used

  • ‘Point factor rating’ is the most popular type of scheme among our sample (68.4%), followed by ‘levelling’ (26.3%) and ‘analytical job matching’ (25.0%).
  • More than half (52.6%) of our respondents use an off-the-shelf job evaluation scheme, while a quarter (25.0%) use a hybrid scheme and around one in five (22.4%) use a tailor-made scheme.
  • Korn Ferry Hay Group is the most commonly used proprietary scheme among our sample (50.0%), followed by Willis Towers Watson (23.2%).

Purpose and effectiveness of job evaluation

  • The three main aims of job evaluation among our respondents are managing internal job relativities (88.2%), comparing internal pay levels with market rates (76.3%) and providing a basis for the design and maintenance of a rational and defensible pay structure (68.8%).
  • More than a quarter (26.3%) of respondents are ‘highly satisfied’ and well over half (59.2%) are ‘reasonably satisfied’ with their current approach to job evaluation.
  • But almost one-in-ten (9.2%) respondents say they are ‘not very satisfied’ with their job evaluation schemes.
  • The top three job evaluation problems reported by respondents are managers and employees not understanding how job evaluation works (63.2%), the scheme not preventing grade drift (34.2%) and the scheme taking too much time to operate (23.7%).

Technology in job evaluation

  • Two-thirds (66.2%) of respondents use technology in some way to assist with job evaluation.
  • Nearly all (93.9%) of this group maintain a database of their job evaluation scores and rationales, while almost a quarter (22.4%) use online forms to collect job information.
  • The top three reported benefits of using technology are better recording of evaluation decisions (79.6%), less paperwork (69.4%) and more consistent evaluation (65.3%).
  • But respondents also reported some problems with using technology in job evaluation, with nearly a quarter (22.4%) having concerns over a perceived lack of transparency over job scores.

Analytical schemes

  • The number of factors in the analytical schemes ranges from 3 to 14, with a median of 6.
  • More than half (52.2%) of respondents report that the factors they use have not been designed to reflect or link to the competencies in their competency framework, while just over a third (34.3%) say the factors have been only partly designed with these in mind.
  • Only around one-in-eight (13.4%) schemes were designed with factors based entirely on respondents’ own competency frameworks.
  • Almost two-thirds (65.1%) of respondents use even scales to score jobs – e.g. 10, 20, 30, 40 – with the remainder (34.9%) using progressive scales – e.g. 10, 20, 35, 55.
  • More than two-fifths (43.9%) explicitly weight certain factors in their schemes – i.e. give them more points – while fewer than one-in-five (17.5%) use implicit weightings – i.e. give some factors extra levels (a worked sketch of both approaches follows this list).
  • More than a third (38.6%) of respondents do not use weightings at all.
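
To make the scale and weighting ideas above concrete, here is a minimal sketch of how a point-factor score might be computed. The factor names, level points and weights are invented for illustration only and are not drawn from any respondent's scheme or any proprietary methodology.

```python
# Illustrative sketch only: the factor names, level points and weights below are
# invented and do not come from any respondent's scheme.

# An 'even' scale gives each successive level the same increment;
# a 'progressive' scale widens the gaps at higher levels.
EVEN_SCALE = [10, 20, 30, 40]
PROGRESSIVE_SCALE = [10, 20, 35, 55]

# Explicit weighting multiplies a factor's points; implicit weighting gives a
# factor extra levels, so its maximum possible score is higher.
FACTORS = {
    # factor name: (level scale, explicit weight)
    "knowledge": (PROGRESSIVE_SCALE, 2.0),            # explicitly weighted
    "problem_solving": (EVEN_SCALE, 1.0),
    "accountability": (EVEN_SCALE + [50, 60], 1.0),   # implicitly weighted: two extra levels
}

def score_job(levels):
    """Total points for a job, given the level (1-based) assessed for each factor."""
    total = 0.0
    for factor, level in levels.items():
        scale, weight = FACTORS[factor]
        total += scale[level - 1] * weight
    return total

# A job assessed at level 3 on every factor:
print(score_job({"knowledge": 3, "problem_solving": 3, "accountability": 3}))
# 35 * 2.0 + 30 * 1.0 + 30 * 1.0 = 130.0
```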

Job evaluation and pay structures

  • Nearly a third (30.7%) of respondents have designed a new pay and grading structure with the help of job evaluation in the last four years.
  • More than a third (39.6%) of respondents with a broad-banded grading structure define the boundaries between grades in terms of job evaluation points and allocate jobs to bands on this basis (a sketch of this allocation appears after this list).
  • Where there is a conflict between internal equity and external competitiveness, more than two-fifths (43.7%) of respondents opt to pay a market supplement on top of salary to maintain competitiveness.
  • Others choose to pay market rates and ignore internal equity (28.2%).
  • More than two-fifths (44.4%) of respondents inform all employees of their job evaluation scores automatically, but nearly two-fifths (38.9%) only inform individuals who lodge an appeal.
  • Just over a quarter (25.4%) of respondents report problems with either informing – or not informing – employees of job evaluation scores, with concerns typically involving an increase in appeals or perceptions of a lack of clarity in the job evaluation process.
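
Where grade boundaries are expressed in job evaluation points, allocating a job to a band is a straightforward lookup. The band names and point ranges below are hypothetical and purely illustrative of the mechanism described above.

```python
# Hypothetical band boundaries in job evaluation points; the names and ranges
# are invented for illustration and are not taken from the survey.
BAND_BOUNDARIES = [
    ("Band A", 0, 199),
    ("Band B", 200, 349),
    ("Band C", 350, 499),
    ("Band D", 500, 700),
]

def allocate_band(je_points):
    """Return the band whose point range contains the job's evaluation score."""
    for band, lower, upper in BAND_BOUNDARIES:
        if lower <= je_points <= upper:
            return band
    raise ValueError(f"No band covers a score of {je_points}")

print(allocate_band(130))  # -> Band A (using the score from the earlier sketch)
```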