
Measurement error

Measurement error occurs when the true data values are not collected from respondents. Potential sources of measurement error in Graduate Outcomes are the survey instrument(s), the telephone interviewers, and the respondents themselves. This section of the report covers each of these in turn. The mode of data collection is also a source of measurement error, and we cover this in more detail in the next section.

Respondent error

The survey takes the following measures to minimise respondent error. We cognitively tested the survey questions prior to launch, and adapted our questionnaire design in the light of the research findings. Information on cognitive testing is available in a technical report[1] and an outcomes report.[2] The implementation of the survey questions in the survey instrument was undertaken with expert input and testing from HESA and our suppliers, in order to pro-actively identify and overcome potential respondent error issues.

The survey instrument is available in both English and Welsh, allowing respondents graduating from providers in Wales to use whichever language they prefer. This should reduce respondent error due to language issues.

The instrument is deployed online, and over the telephone, which offers respondents some choice over how to engage. Details about the implementation of the instrument can be found in the Survey methodology sections dealing with the online[3] and telephone[4] based aspects of our approach, and these materials also contain further information about how we seek to minimise respondent error. Online, we use a series of prompts to encourage the respondent to check the accuracy of their responses. Over the telephone, our interviewers’ script similarly prompts operatives to elicit accurate responses through checking understanding back with the respondent. (We will from now on refer to the computer-assisted telephone interviewing by its widely-accepted acronym – CATI.)

Some examples of respondent error we believe may occur are:

  • Information retrieval may be difficult for those respondents reporting several jobs. They may not remember precisely, or may not have access to, information about, for example, their previous earnings for a job they left months beforehand.
  • Brevity or lack of response to free text questions could lead to differences in SOC codes for graduates in similar jobs. This equally applies to other coded free-text data. However, the SOC coding process is more sensitive to this sort of issue than, for example, free-text country data, as the input data is more extensive and there is some degree of semantic overlap between the output codes.
  • Cases where respondents select unemployed and paid work simultaneously. During the first year of the survey, 950 of the respondents in paid work for an employer had also indicated that they were unemployed; of these, 270 had said that being unemployed was their most important activity. In the second year, 1,085 respondents in paid work for an employer had also indicated that they were unemployed; of these, 330 had said that being unemployed was their most important activity. In year three, 1,050 paid work respondents also indicated that they were unemployed, of whom 255 indicated that being unemployed was their main activity; and in the fourth year, 750 graduates selected both paid work for an employer and unemployed, with 200 of these indicating that unemployment was their most important activity.[5] (A minimal sketch of how such contradictory selections might be flagged follows this list.)
  • Acquiescence bias (sometimes called agreement bias or ‘straight-lining’, and alternatively referred to as ‘yea-saying/nay-saying’) is a tendency for respondents to give positive (or negative) responses in a routine fashion, which may not reflect their ‘true’ feelings. HESA continuously reviews the impact of survey design on response distributions where there is potential for such bias; this is reported under subsequent sections on Data Quality.
  • Social desirability bias occurs where respondents tend to give socially desirable responses instead of choosing responses that are reflective of their ‘true’ situation. Examples where this could occur might include reporting a higher salary, or a greater sense of subjective wellbeing (SWB). Other studies have indicated that this kind of bias may vary by mode of response.
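
To illustrate how contradictory activity selections of this kind might be identified during processing, the following minimal sketch flags respondents who tick both paid work for an employer and unemployment, and counts how many marked unemployment as their most important activity. The field names and data structure are hypothetical assumptions, not the Graduate Outcomes data model; the actual treatment of this contradiction in published outputs is defined in the XACTIVITY specification.[5]

```python
# Minimal sketch, assuming a simplified response structure: flag respondents
# who select both 'paid work for an employer' and 'unemployed', and count how
# many of those marked unemployment as their most important activity.
# Field names below are hypothetical, not the Graduate Outcomes data model.
from dataclasses import dataclass

@dataclass
class ActivityResponse:
    respondent_id: str
    paid_work: bool        # ticked 'paid work for an employer'
    unemployed: bool       # ticked 'unemployed'
    most_important: str    # activity marked as most important

def flag_contradictions(responses):
    contradictory = [r for r in responses if r.paid_work and r.unemployed]
    unemployed_primary = [r for r in contradictory if r.most_important == "unemployed"]
    return len(contradictory), len(unemployed_primary)

# Toy example: two contradictory responses, one with unemployment as the main activity.
sample = [
    ActivityResponse("g1", True, True, "unemployed"),
    ActivityResponse("g2", True, True, "paid_work"),
    ActivityResponse("g3", True, False, "paid_work"),
]
print(flag_contradictions(sample))  # (2, 1)
```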

For details of our investigations into these forms of respondent error, readers are directed to the Reliability of sensitive data section, where we discuss our analysis of the data. While further work is required to investigate the extent of these forms of bias on the survey, we are able to show the current extent of our understanding of their effect.

In the dissemination section of the Graduate Outcomes Survey methodology, details are given about how HESA interprets and publishes responses.[6] In the section of the Survey methodology covering key data concepts and standards, explanations are given around the analysis that has been carried out on a number of key data items. In the section on salary, there is specific information about the approach HESA has taken to handling any potential respondent error. This includes an update to the approach we have taken in trimming the salaries to exclude outliers, and future corrective actions, including improvements to the instrument to reduce the risk of misunderstanding that leads to respondent error.
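
As a simple illustration of the kind of outlier handling described above, the sketch below trims reported salaries that fall outside chosen percentile bounds. The percentile thresholds and the approach are assumptions for illustration only; HESA’s actual trimming rules are set out in the salary section of the Survey methodology.

```python
# Minimal sketch, assuming percentile-based trimming: exclude reported salaries
# outside chosen percentile bounds. The bounds (1st/99th) are illustrative
# assumptions, not HESA's published trimming rules.
import numpy as np

def trim_salaries(salaries, lower_pct=1.0, upper_pct=99.0):
    """Return only the salaries inside the chosen percentile bounds."""
    arr = np.asarray(salaries, dtype=float)
    lo, hi = np.percentile(arr, [lower_pct, upper_pct])
    return arr[(arr >= lo) & (arr <= hi)]

# Toy data containing two implausible values (likely keying errors).
reported = [18500, 21000, 2400000, 26500, 950, 30000]
print(trim_salaries(reported))  # the implausible extremes are excluded
```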

One limitation on the respondent’s ability to correct their own errors is the unavailability of a ‘back’ button in the online survey. Respondents are therefore unable to go back and change their answers to previous questions. This is done largely for data protection reasons (this is covered at greater length in the section of the Survey methodology on the online survey design);[7] it also reduces the risk of ‘orphaned’ data occurring, where a respondent enters data that is not required when they subsequently return to an earlier point in the survey to make an alternative choice, which consequently alters their survey routing.

We are aware that more evidence needs to be gathered on whether respondent error represents a significant issue in the survey. For instance, for those who stated in the survey that they were undertaking further study in the UK HE sector, there is the potential to link their response to the HESA student record. This would offer the opportunity to evaluate the extent of measurement error in this part of the survey. Further investigations have been undertaken into this issue, and an interim digest of these is covered in the Graduate Outcomes and the HESA Student record section.
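
The sketch below illustrates, in outline, how such a linkage-based check might be expressed: the proportion of respondents who reported further study in UK HE and who can also be found in the student record gives a crude agreement rate. The identifiers and function are hypothetical; the actual linkage methodology and findings are covered in the Graduate Outcomes and the HESA Student record section.

```python
# Minimal sketch, assuming a common graduate identifier exists in both sources:
# the share of respondents who reported further study in UK HE and who can also
# be found in the HESA student record gives a crude agreement rate. Names here
# are hypothetical; the real linkage methodology is documented by HESA.
def agreement_rate(survey_further_study_ids, student_record_ids):
    """Proportion of self-reported further-study respondents found in the student record."""
    survey = set(survey_further_study_ids)
    if not survey:
        return float("nan")
    return len(survey & set(student_record_ids)) / len(survey)

# Toy example: three of four self-reported further-study respondents are linked.
print(agreement_rate({"g1", "g2", "g3", "g4"}, {"g2", "g3", "g4", "g9"}))  # 0.75
```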

Survey instrument error

Significant effort is invested in reducing opportunities for instrument error, and the first element of this is the choice of platforms, partners, and personnel involved. HESA manages the survey and appoints the suppliers.[8] HESA’s procurement and supplier management approaches seek to ensure that suppliers deliver on the process quality requirements imposed by HESA. Forsta (formerly Confirmit) remains HESA’s feedback management solution supplier. Forsta’s technology is widely used to conduct surveys by leading sector bodies, including the Office for National Statistics, and also in market research contexts. It includes a smartphone-compatible online system. HESA’s current contact centre provider is IFF Research. IFF has previously worked with many individual providers on their delivery of Graduate Outcomes’ predecessor, DLHE. IFF was also the survey contractor for all six iterations of the Longitudinal DLHE survey.

The survey instrument is ultimately HESA’s responsibility, and HESA is an official statistics producer with a track record in delivering the DLHE and LDLHE (Longitudinal Destinations of Leavers from Higher Education) surveys for over twenty years as well as a successful launch of the Graduate Outcomes survey with ‘a range of positive features that demonstrate the trustworthiness, quality and value of the statistics’.[9] HESA’s staff are skilled across the range of statistical business processes, including developing the methodologies, procuring survey and coding services, developing and commissioning software systems, data processing and enrichment, quality assurance, conducting and commissioning research, analysis, dissemination, and undertaking reviews. Users can therefore trust that the survey is being delivered by an organisation with experience and skill in appropriate professional domains.

The instrument was tested thoroughly by staff from HESA, IFF, and Forsta prior to deployment. However, the complexity of the survey routing meant that some less likely routing combinations were only tested to a limited extent. All problems discovered during testing were fixed prior to launch. We also note that Forsta named HESA as the judges’ choice in its ‘Achievement in Insight and Research’ awards in September 2019, in recognition of the high standards, creativity and innovation with which its platform is being used.

HESA demonstrates an evidence-based approach to operational data quality management, backed up by a clear governance approach. A log is kept of all instances of potential instrument error and a process is operated to investigate and assess each issue for the level of its impact. This approach is substantiated by regular progress updates, which explain these same issues to stakeholders.[10]

We summarise the main sources of potential instrument error relating to recent years of the survey in the following subsections.

Survey alterations to increase retention and improve data quality

The following changes to the survey were introduced in year three to improve respondent retention (i.e. reduce item and unit non-response) and data quality:

  • Introduction of information buttons for hover texts to provide reassurances on sensitive questions in the survey (e.g. employer’s name and salary)
  • Optimisation of the presentation of Graduate Voice questions
  • Simplified wording of the town/city questions to improve comprehension and provision of useable information
  • Contextual information added to one of the categories under ‘Type of Qualification’ to aid understanding
  • Additional validation around postcode to encourage respondents to provide partial information instead of a ‘don’t know’ response

Further changes to the survey were made in year four, and included:

  • Further refinement of the hover text on employer’s name 
  • The addition of a drop-down list for the town/city question 
  • Additional information provided for the salary question in the desktop mode for respondents who start to leave the survey; the text lets respondents know that the question is optional, in order to encourage them to continue
  • Removal of questions deemed to be no longer required for data capture, reducing graduate burden and survey fatigue

Email and SMS delivery

Where providers have supplied email addresses for graduates on their domain, e.g. joe.bloggs@[provider].ac.uk, they are advised to be mindful of the expiry period for these addresses. Some providers allow graduates to keep these addresses for life; others expire them after a fixed period (e.g. six months after course completion). These email addresses should only be returned as valid graduate contact details for Graduate Outcomes when they are still live accounts on providers’ systems. Where providers are satisfied that the provider domain email address will be live at the point of HESA contact, we have suggested that providers allow the relevant email sender address, which will be [providername]@graduateoutcomes.ac.uk. This will help ensure these emails are delivered successfully. It is important that provider domain email addresses are still live as this has an impact on HESA’s IP address reputation. Should provider domain email addresses be shut down at the start of the survey period, this may lead to our emails bouncing and our IP address being deny-listed. This would put a halt to HESA’s email capability, restricting our surveying to phone or SMS only. Providers are therefore further incentivised to pay attention to this quality factor.
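
The sketch below illustrates one simple way this deliverability risk could be monitored: computing the bounce rate of provider-domain addresses per provider and flagging those above a threshold. The 5% threshold, field names, and data structure are illustrative assumptions rather than HESA’s operational rules.

```python
# Minimal sketch, assuming a simple send log: compute the bounce rate of
# provider-domain email addresses per provider and flag providers above a
# threshold. The 5% threshold and field names are illustrative assumptions.
from collections import defaultdict

def provider_bounce_rates(send_log):
    """send_log: iterable of (provider, bounced) pairs for provider-domain addresses."""
    sent, bounced = defaultdict(int), defaultdict(int)
    for provider, did_bounce in send_log:
        sent[provider] += 1
        bounced[provider] += int(did_bounce)
    return {p: bounced[p] / sent[p] for p in sent}

def flag_providers(rates, threshold=0.05):
    return sorted(p for p, r in rates.items() if r > threshold)

log = [("P1", False), ("P1", True), ("P2", False), ("P2", False)]
print(flag_providers(provider_bounce_rates(log)))  # ['P1']
```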

A summary of our research[11] on the effectiveness of various contact details concluded that, in order to maximise our chances of contacting graduates, we need the following:

  • As far as possible, a mobile number for every graduate.
  • At least one mobile number and email address should be supplied for UK graduates. These should help obtain good online and telephone response rates.
  • An ac.uk email address is generally not important and is less likely to perform well. A personal email address should be supplied for every graduate and, as far as possible, it should be the individual’s current address.
  • Contact details should be continuously updated during the survey field period to give us the best chance of contacting graduates.

Email delivery rates continue to be extremely high in every round of invitations, above 95%. SMS delivery rates are also high and regularly exceed 80%. Completion via SMS link was responsible for 31% of all the online survey responses received during cohort D.

At the beginning of each cohort HESA conducts a quality assessment on the completeness of the contact details record and identifies providers with the worst set of contact details. These providers are notified via targeted contact and asked to rectify the issues identified. This exercise has mixed outcomes. Some providers are able to provide more and improved contact details while others are unable to do so or do not engage with the process. HESA has undertaken further work during year four to streamline this process and has made the outcomes more visible to statutory organisations. Research has indicated that there is a correlation between low response rates and providers with a low coverage of emails and mobile numbers. Further work is ongoing and includes the implementation of additional quality rules and a continuation of work with providers and statutory organisations.
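
A minimal sketch of the kind of completeness check described above is given below: per-provider coverage of mobile numbers and email addresses, with the lowest-coverage providers surfaced for targeted follow-up. The field names and ranking are illustrative assumptions, not HESA’s actual quality rules.

```python
# Minimal sketch, assuming one contact record per graduate: per-provider
# coverage of mobile numbers and email addresses, worst providers first.
# Field names and the ranking are illustrative assumptions only.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ContactRecord:
    provider: str
    has_mobile: bool
    has_email: bool

def coverage_by_provider(records):
    totals, mobiles, emails = defaultdict(int), defaultdict(int), defaultdict(int)
    for r in records:
        totals[r.provider] += 1
        mobiles[r.provider] += int(r.has_mobile)
        emails[r.provider] += int(r.has_email)
    return {p: (mobiles[p] / totals[p], emails[p] / totals[p]) for p in totals}

def lowest_coverage(records, n=10):
    """Providers ranked by combined mobile and email coverage, worst first."""
    cov = coverage_by_provider(records)
    return sorted(cov.items(), key=lambda kv: kv[1][0] + kv[1][1])[:n]

# Toy example: P2 has no usable contact details and ranks worst.
records = [
    ContactRecord("P1", True, True),
    ContactRecord("P1", False, True),
    ContactRecord("P2", False, False),
]
print(lowest_coverage(records))
```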

Call handling

There are numerous indicators suggesting that the telephone interviewing component of Graduate Outcomes, and the call handling approach described in the previous edition of this report, are now firmly established and delivering successful outcomes for the project. Some of the main highlights of this year’s operations were:

  • Stable response rates (around 60% of responses are collected over the telephone)
  • The continued strength of the collaborative and joined-up partnership between HESA and IFF, which ensured they were able to build on successes in previous years whilst also navigating new challenges.
  • Improved sample management, owing to the detailed analysis conducted as part of the Year two review.
  • Continued focus on high quality data collection and quality control processes.

Interviewer error

Interviewer error is the effect of a human interviewer on the data gathering process. Graduate Outcomes uses many interviewers concurrently. CATI interviewers undergo training developed especially for the Graduate Outcomes survey, which focuses on the contextual knowledge interviewers need to perform their roles effectively. They are recruited and trained by IFF according to closely-monitored quality criteria. Quality assurance by call monitoring is also part of standard practice. All interviews are recorded digitally to keep an accurate record of interviews. A minimum of 5% of each interviewer’s calls are reviewed in full by a team leader. Quality control reviews are all documented using a series of scores. Should an interviewer receive below-acceptable scores, the issues raised will be discussed with them, an action plan agreed and signed, and their work subjected to further quality control. Information about this is covered in the data collection section of the Survey methodology.[12] Further details are given in the operational survey information section on the contact centre.[13]
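
As an illustration of the sampling step in this quality control process, the sketch below draws a random sample of at least 5% of each interviewer’s recorded calls for full review. The data structures are hypothetical assumptions; the actual selection and scoring process is operated by IFF as described in the sections referenced above.

```python
# Minimal sketch, assuming call recordings are indexed by interviewer: draw a
# random sample of at least 5% of each interviewer's calls for full review.
# The structures are hypothetical; the real process is operated by IFF.
import math
import random

def qc_sample(calls_by_interviewer, rate=0.05, seed=None):
    """calls_by_interviewer: dict mapping interviewer id -> list of call ids."""
    rng = random.Random(seed)
    sample = {}
    for interviewer, calls in calls_by_interviewer.items():
        k = max(1, math.ceil(rate * len(calls)))  # at least one call per interviewer
        sample[interviewer] = rng.sample(calls, k)
    return sample

calls = {"int01": [f"call{i}" for i in range(40)], "int02": [f"call{i}" for i in range(10)]}
print({k: len(v) for k, v in qc_sample(calls, seed=1).items()})  # {'int01': 2, 'int02': 1}
```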

CATI operatives utilise an adapted version of the same instrument as online respondents. This allows a further level of data quality checks to be performed, as CATI operatives receive similar feedback from the online instrument to online respondents, in addition to having their own quality processes built into the script. This also prevents any ‘clash’ or data problems occurring due to respondent mode switches. One difference is that a ‘back button’ is available to CATI operatives, which allows adjustments to be made if a respondent wishes to change an earlier answer in the light of a later question. Anecdotal feedback of this kind from interviewers could help identify potential sources of respondent error, and HESA and IFF evaluate feedback from CATI operatives regularly to determine whether instrument improvements could offer marginal enhancements to data collection. While human error is always a potential factor, this is likely to be a matter of random variance in keying errors. There is no evidence to suggest that interviewer error has had any significant impact on the conduct of the survey. Rather, CATI operatives are a useful source of quality improvement suggestions, and regular fortnightly meetings occur where performance and survey issues are discussed, and recommendations logged for further assessment and action.



[1] See https://www.hesa.ac.uk/files/Cognitive%20Testing%20Technical%20report.pdf

[2] See https://www.hesa.ac.uk/files/Cognitive%20Testing%20Outcomes%20report.pdf

[3] For online aspects, see: https://www.hesa.ac.uk/data-and-analysis/graduates/methodology/online-survey-design

[4] For telephone and contact centre aspects of the instrument, see https://www.hesa.ac.uk/data-and-analysis/graduates/methodology/telephone-survey-design

[5] For details of how HESA reflects this contradictory information in published outputs, see the XACTIVITY specification at: https://www.hesa.ac.uk/collection/c19072/derived/xactivity

[6] See https://www.hesa.ac.uk/data-and-analysis/graduates/methodology/dissemination

[7] See https://www.hesa.ac.uk/data-and-analysis/graduates/methodology/online-survey-design

[8] See our press release: https://www.hesa.ac.uk/news/14-11-2018/complete-graduate-outcomes-line-up

[9] See OSR’s letter to HESA of 2021-03-18: https://osr.statisticsauthority.gov.uk/correspondence/mark-pont-to-jonathan-waller-higher-education-graduate-outcomes-data/

[10] Readers wishing to understand these issues in detail, and in chronological order, are recommended to read the reviews, which are published at: https://www.hesa.ac.uk/innovation/outcomes/about/progress

[11] See https://www.hesa.ac.uk/blog/05-05-2021/improving-graduate-outcomes-response-rates-why-quality-contact-details-matter

[12] See https://www.hesa.ac.uk/data-and-analysis/graduates/methodology/data-collection

[13] See https://www.hesa.ac.uk/definitions/operational-survey-information#contact-centre-methodology