
Mode effects

A mode effect is a systematic difference that is attributable to the mode of data collection. Analysing the effect of mode on item responses (and aspects of response propensity) is part of our current programme of work.

Mixed-mode surveys are increasingly common. A typical research survey operated in a mixed-mode fashion might survey a sample electronically, and then follow up with a telephone survey later on, either to provide a more qualitative set of insights into a sub-sample, or to address non-response issues arising during the initial survey period. There are many possible such designs. The design of the Graduate Outcomes survey was a collaborative exercise that took into account knowledge developed by HESA and the HE sector during the operation of its predecessor surveys, DLHE and LDLHE (Longitudinal Destinations of Leavers from Higher Education). One important factor we took into account was the widely held perception that telephone surveying from an early stage, combined with online surveying, was likely to be necessary to meet user needs for both high response rates and the efficiencies generated through an online mode. We therefore sought to retain the best aspects of previous practice, and this is reflected in our adoption of a concurrent mixed-mode design.[1]

Our approach is described in detail in the section of the Survey methodology covering data collection,[2] and in the associated operational survey information.[3] It is underpinned by a single technology solution (Confirmit) that links the online (mobile and desktop) and telephone-based modes together seamlessly. Survey responses can be saved and picked up later, in either mode. In practice, this means that respondents may begin the survey in one mode and end it in another, or even change mode several times while they are engaging with the survey. The system logs all events, and these system logs form the basis of HESA’s paradata, including modal information. The paradata, which also includes timing and duration information, is very rich, but requires some complex scripting to access; as we learn more about the capabilities of the system, we are extending the catalogue of paradata we wish to extract from it. This system-generated logging data is, in its own way, as rich as the collected survey data itself, and offers insights into the behavioural characteristics of respondents. When combined with our data on the population characteristics, it also yields potential insights into non-respondents. Our initial task has therefore been to define more precisely the characteristics of the various survey engagement modes.
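
To illustrate the kind of modal paradata that can be derived from event logs of this sort, the Python sketch below reconstructs a start mode, a completion mode and a count of mode switches for a single respondent. The event structure, field names and values are assumptions made for illustration only; they do not reflect the actual Confirmit log format or our extraction scripts.

```python
# A minimal illustrative sketch: the event structure, field names and values
# are assumptions and do not reflect the actual Confirmit log format.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class SurveyEvent:
    respondent_id: str
    timestamp: datetime
    mode: str    # e.g. "online" or "telephone" (assumed values)
    event: str   # e.g. "started", "saved", "completed" (assumed values)

def modal_paradata(events: List[SurveyEvent]) -> dict:
    """Derive simple mode-related paradata for one respondent from their event log."""
    ordered = sorted(events, key=lambda e: e.timestamp)
    modes = [e.mode for e in ordered]
    completed = next((e for e in reversed(ordered) if e.event == "completed"), None)
    return {
        "start_mode": modes[0] if modes else None,
        "completion_mode": completed.mode if completed else None,
        "mode_switches": sum(1 for a, b in zip(modes, modes[1:]) if a != b),
        "engagement_span": ordered[-1].timestamp - ordered[0].timestamp if ordered else None,
    }
```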

Our current paradata dictionary includes variables for the start mode, partial completion mode, completion mode, various status markers, the last question viewed, the number of calls made, and a range of variables relating to the sending of emails and SMS messages.[4] Throughout the second year of operations we have been using some of this paradata to inform our data collection processes: for example, identifying the most suitable times for sending emails and SMS messages based on completion times, changing subject lines to encourage higher email open and click rates, and monitoring interviewer performance using the average number of calls made.
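
As a simple illustration of these operational uses, the sketch below derives the most common completion hours (to inform email and SMS send times) and the average number of calls per interviewer from a flat paradata extract. The file name and column names are hypothetical and do not correspond to the variables in our paradata dictionary.

```python
# A minimal sketch, assuming a flat paradata extract with hypothetical column
# names; the real variable names sit in our internal paradata dictionary.
import pandas as pd

paradata = pd.read_csv("paradata_extract.csv", parse_dates=["completion_timestamp"])

# Most common completion hours, as one input into choosing email/SMS send times.
completion_hours = paradata["completion_timestamp"].dt.hour.value_counts()

# Average number of calls per interviewer, as one input into monitoring performance.
avg_calls = (paradata.groupby("interviewer_id")["number_of_calls"]
             .mean()
             .sort_values())

print(completion_hours.head(), avg_calls.head(), sep="\n\n")
```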

However, we are aware of the additional potential hidden within the various markers in the system, which could yield additional formally catalogued paradata. We are keen to use this paradata to support operational improvements, as well as to investigate mode effects. A paradata team has been formed with a remit to develop and catalogue paradata, and we have been engaging with microdata users about the paradata variables they would find most useful. We have steadily increased the number of paradata variables available and developed a routine to process these into our data warehouse. We have evaluated data quality for some variables and are endeavouring to improve their specification to a point where they become useful for case prioritisation. This work has helped us develop a deeper understanding of respondent behaviours and characteristics, and of non-respondent characteristics.
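
Purely as an illustration of how paradata and population characteristics might eventually feed case prioritisation, the sketch below scores incomplete cases for telephone follow-up. The columns and the scoring rule are assumptions for illustration; they are not the prioritisation rule used in Graduate Outcomes.

```python
# A purely illustrative sketch of how paradata might feed into case
# prioritisation; the columns and the scoring rule are assumptions, not the
# rule used in Graduate Outcomes.
import pandas as pd

cases = pd.read_csv("active_cases.csv")

# Example rule: among cases not yet complete, prioritise telephone follow-up for
# graduates who have already engaged online and for under-represented groups.
cases["priority_score"] = (
    cases["started_online"].astype(int)    # prior engagement signal (paradata)
    + cases["underrepresentation_weight"]  # population-characteristic signal
)

call_queue = (cases.loc[cases["completed"] == 0]
              .sort_values("priority_score", ascending=False))
print(call_queue[["case_id", "priority_score"]].head())
```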

One of the key considerations in our quality analysis work is therefore the mode of data collection, which must work to maximise the response rate of the survey whilst also allowing high-quality data to be collected. The use of multiple modes can increase representativeness but can also lead to measurement error.[5] For instance, telephone interviews are important in increasing response rates, and therefore reducing non-response, but can also increase measurement error,[6] whereas the use of online self-administered surveys can help to reduce respondent burden and increase the likelihood of a graduate disclosing information that may be viewed as sensitive.[7] Self-administration also makes it easier for a participant to fully process and understand a question, which can make it a more accessible option and improve the quality of answers. However, it can also be more susceptible to behaviours such as satisficing.[8]

As has been briefly highlighted above, there can be many different issues with the quality of survey data, and the completion mode used by a respondent can exacerbate these. Other forms of bias and error must therefore be considered, especially where they may be influenced by the mode of completion. For instance, selection bias is likely to be present in the survey regardless of mode; however, selection and mode are closely intertwined, and mode can be used to help increase the representativeness of the data, for example through the case prioritisation process that operates in Graduate Outcomes. Equally, confidentiality affects social desirability bias: bias is more likely if respondents are identified in a survey, which is relevant to Graduate Outcomes, and the effect is likely to be more pronounced when an interviewer administers the survey,[9] as is the case with the telephone interviews.

Throughout our work, we analysed various elements of the data to identify changes in response patterns or question non-response. For each area, we assessed the possible reasons for any changes, giving consideration to factors such as the cognitive load placed upon participants, the potential for misinterpretation, social desirability bias, satisficing, and primacy and recency effects, among other things, to help describe some of the patterns in the data. These factors can all be influenced by mode in ways that we may not expect. For instance, in terms of primacy and recency effects, telephone interview respondents are more likely to provide the answer that they heard last, whereas online self-completion respondents are more likely to select the first option.[10]
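
As a rough illustration of how such a primacy/recency pattern might be checked, the sketch below compares, by mode, the share of respondents choosing the first and last listed options for a single question. The file name, column names and option order are hypothetical.

```python
# A rough sketch of one way to look for primacy/recency effects, assuming a
# response extract with hypothetical columns 'mode' and 'q_example', and a
# known presentation order of options for that question.
import pandas as pd

OPTION_ORDER = ["Option A", "Option B", "Option C", "Option D"]  # assumed order

responses = pd.read_csv("responses_extract.csv")

# Share of respondents choosing each option, by mode.
shares = (responses.groupby("mode")["q_example"]
          .value_counts(normalize=True)
          .rename("share")
          .reset_index())

# Focus on the first and last listed options: over-selection of the first option
# online, and of the last option by telephone, would be consistent with primacy
# and recency effects respectively.
first_last = shares[shares["q_example"].isin([OPTION_ORDER[0], OPTION_ORDER[-1]])]
print(first_last)
```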

Our work on mode effects has concentrated in the first instance on particularly sensitive data items: subjective wellbeing, salary, and location. Most of this analysis is therefore presented in the following section. In summary, mode effects seem to have reduced between years for anxiety, which could perhaps indicate a reduction in confusion in the online mode, but could also be caused by other factors. There is, however, a bigger disparity in the positively worded questions, which could be influenced by social desirability bias and pandemic effects. Anxiety levels have risen for telephone interviews. This is possibly because, following the Covid-19 pandemic, people now see it as more socially acceptable to report higher anxiety to interviewers, whereas in the past people may have felt more comfortable providing a higher anxiety rating online than through a telephone interview. Research suggests that underreporting of sensitive issues is likely to be lower both when a topic becomes more socially acceptable and when there is less stigma associated with it.[11]
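
A minimal sketch of the kind of comparison underpinning this summary is given below: mean anxiety ratings by survey year and collection mode, where a narrowing gap between modes across years would be consistent with a reduced mode effect. The file and column names are hypothetical, and the full analysis is presented in the following section.

```python
# A minimal sketch, assuming a combined extract with hypothetical columns
# 'cohort_year', 'mode', and 'anxiety' (a 0-10 rating).
import pandas as pd

wellbeing = pd.read_csv("wellbeing_extract.csv")

# Mean anxiety rating by survey year and collection mode; a narrowing gap
# between modes across years would be consistent with a reduced mode effect.
mode_year_means = (wellbeing
                   .groupby(["cohort_year", "mode"])["anxiety"]
                   .agg(["mean", "count"])
                   .round(2))
print(mode_year_means)
```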

Our current view is that data quality could benefit from further completion mode analysis, in particular analysis that considers primacy and recency effects.[12] Equally, mode analysis could benefit from the inclusion of characteristic data, to check whether effects are influenced by the characteristics of the graduates responding in a particular mode. This will be particularly relevant if, for example, significant methodological changes are made to the way different modes of data collection are used in the survey. These steps will form part of the continual monitoring and improvement of the survey data in future.
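
One possible form such an analysis could take, given here only as a hedged sketch with hypothetical column names rather than an adopted method, is a regression of an outcome on completion mode with graduate characteristics included as controls; if the mode coefficients shrink once characteristics are added, part of the apparent mode effect is compositional rather than a measurement effect of the mode itself.

```python
# A sketch of one possible approach (not an adopted method): regress an outcome
# on completion mode while controlling for graduate characteristics. All column
# names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses_with_characteristics.csv")

# If the estimated mode coefficients shrink once characteristics are included,
# part of the apparent mode effect reflects who responds in each mode rather
# than a measurement effect of the mode itself.
model = smf.ols(
    "anxiety ~ C(mode) + C(sex) + C(age_group) + C(subject_area)",
    data=df,
).fit()
print(model.summary())
```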



[1] For completeness, we must explain that a separate, paper-based approach is used in a minority of cases where respondents are known not to have access to a telephone or computer. This mode asks the mandatory questions required for a complete response. Only 25 postal responses were received during the first year of surveying. Because these responses are so few, we do not discuss the paper-based mode in detail in this report.

[4] This remains an unpublished internal document at the time of writing.

[5] (Kocar and Biddle, 2020)

[6] (Chang and Krosnick, 2010)

[7] (Brown et al., 2008)

[8] (AAPOR, 2010)

[9] (Kocar and Biddle, 2020)

[10] (Chang and Krosnick, 2010; Kocar and Biddle, 2020)

[11] (McNeeley, 2012)

[12] (Chang and Krosnick, 2010; Kocar and Biddle, 2020)