Mode effects

A mode effect is a systematic difference that is attributable to the mode of data collection. Analysing the effect of mode on item responses (and aspects of response propensity) is part of our current programme of work.

Mixed-mode surveys are increasingly common. A typical research survey operated in a mixed-mode fashion might survey a sample electronically and then follow up with a telephone survey later on, either to provide a more qualitative set of insights into a sub-sample, or to address non-response issues arising during the initial survey period. There are many possible such designs. The design of the Graduate Outcomes survey was a collaborative exercise that took into account knowledge developed by HESA and the HE sector during the operation of its predecessor surveys, DLHE (Destinations of Leavers from Higher Education) and LDLHE (Longitudinal DLHE). One important factor we took into account was the widely-held perception that telephone surveying from an early stage, combined with online surveying, was likely to be necessary in order to meet user needs for both high response rates and the efficiencies generated through an online mode. We therefore sought to retain the best aspects of previous practice, and this is reflected in our adoption of a concurrent mixed-mode design.[1]

Our approach is described in detail in the section of the Survey methodology covering data collection,[2] and in the associated operational survey information.[3] It is underpinned by a single technology solution (Forsta, formerly Confirmit) that links the online (mobile and desktop) and telephone-based modes together seamlessly. Survey responses can be saved and picked up later, in either mode. In practice, this means that respondents may begin the survey in one mode and end it in another, or even, potentially, change mode several times during the period in which they are engaging with the survey. The system logs all events, and these system logs form the basis of HESA’s paradata, including modal information.
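
To illustrate, purely as a hypothetical sketch and not HESA’s actual processing, paradata of this kind can be summarised to identify which modes a respondent used and whether they switched mode during completion. The column names and values below are invented for the example.

    import pandas as pd

    # Invented event-log extract: one row per logged survey event.
    events = pd.DataFrame({
        "respondent_id": [101, 101, 101, 102, 102],
        "event_time": pd.to_datetime([
            "2023-01-05 10:02", "2023-01-05 10:20", "2023-01-09 14:30",
            "2023-01-06 09:00", "2023-01-06 09:25",
        ]),
        "mode": ["online", "online", "cati", "online", "online"],
    })

    # Order events within each respondent, then summarise the modes used
    # and whether the respondent changed mode at any point.
    events = events.sort_values(["respondent_id", "event_time"])
    summary = events.groupby("respondent_id")["mode"].agg(
        first_mode="first",
        last_mode="last",
        n_modes="nunique",
    )
    summary["switched_mode"] = summary["n_modes"] > 1
    print(summary)

In practice such a summary would be derived from the full system event log rather than a hand-built table, but the principle of attributing each response, or part of a response, to a mode is the same.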

One of the key considerations in our quality analysis work is the mode of data collection, which must work to maximise the response rate of the survey whilst also allowing high-quality data to be collected. The use of multiple modes can increase representativeness but can also lead to measurement error.[4] For instance, telephone interviews are important in increasing response rates, and therefore reducing non-response, but can also increase measurement error,[5] whereas the use of online self-administered surveys can help to reduce respondent burden and increase the likelihood of a graduate disclosing information that may be viewed as sensitive.[6] Self-administration of a survey also makes it easier for a participant to fully process and understand a question, which can make it a more accessible option and improve the quality of answers. However, it can also be more susceptible to behaviours such as satisficing.[7] Other factors may also influence responses; for example, research suggests that underreporting of sensitive issues is likely to be lower when a topic becomes more socially acceptable and when there is less stigma associated with it.[8]

Our work considering the potential influence of the mode of completion this year has concentrated, in the first instance, on responses provided to the activity section and the paid/voluntary work for an employer section of the survey, as the first part of an ongoing quality review of the survey. We therefore present some of this analysis in the following sections, initially with a focus on the more sensitive questions and subsequently on some of the other questions from the activity and employment sections of the survey. In summary, differences are visible in some data items depending on the completion mode utilised by the graduate and the type of question being answered. Job title and duties had higher levels of item non-response in the online mode, whereas salary had higher item non-response in the CATI completion mode. This is more likely to be the case with questions that may be perceived as sensitive, depending on the mode being utilised. Indeed, questions which are less sensitive, such as the multiple jobs questions or the employment basis question, tend to have much more similar levels of item non-response across modes. We have furthered the analysis of completion mode but, as in previous years, data quality could benefit from continued analysis considering primacy and recency effects and the influence of the mode of completion.[9] Equally, mode analysis could benefit from the inclusion of characteristic data, to check whether effects are influenced by the characteristics of the graduates completing via a particular mode. This will be particularly relevant given the methodological changes that were made to the way different modes of data collection are used in the survey. These steps will form part of the continual monitoring and improvement of the survey data.
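
As a purely illustrative sketch of the kind of comparison described above (the data and item names are invented, and this is not the report’s analysis code), item non-response rates can be tabulated by completion mode as follows.

    import pandas as pd

    # Invented response extract: one row per graduate, with missing values
    # representing item non-response.
    responses = pd.DataFrame({
        "mode": ["online", "online", "online", "cati", "cati", "cati"],
        "job_title": ["Analyst", None, None, "Teacher", "Nurse", "Engineer"],
        "salary": [30000, 28000, None, None, None, 31000],
    })

    # Proportion of missing values for each item, split by completion mode.
    item_nonresponse = responses.groupby("mode")[["job_title", "salary"]].agg(
        lambda s: s.isna().mean()
    )
    print(item_nonresponse)

Comparisons of this kind underpin the observation that item non-response varies by mode and by the perceived sensitivity of the question; any formal assessment would also need to control for the characteristics of graduates responding in each mode.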

Next: Reliability of sensitive data


[1] For completeness, we must explain that a separate, paper-based approach is used in a minority of cases where respondents are known not to have access to a telephone or computer. This mode asks the mandatory questions required for a complete response. Only 25 postal responses were received during the first year of surveying. Because these responses are so few, we do not discuss the paper-based mode in detail in this report.

[2] See https://www.hesa.ac.uk/data-and-analysis/graduates/methodology/data-collection

[3] See https://www.hesa.ac.uk/definitions/operational-survey-information

[4] (Kocar and Biddle, 2020)

[5] (Chang and Krosnick, 2010)

[6] (Brown et al., 2008)

[7] (AAPOR, 2010)

[8] (McNeeley, 2012)

[9] (Chang and Krosnick, 2010; Kocar and Biddle, 2020)