Subjective wellbeing data

Subjective well-being is a self-reported measure which provides an indication of a person’s overall feelings at a particular time[1] and can therefore be considered a useful metric in building a wider understanding of a graduate’s life. The subjective wellbeing section of the Graduate Outcomes survey features four questions set by the Office for National Statistics (ONS), offering a standardised measure of wellbeing across years and surveys. The first three questions cover life satisfaction, feeling that life is worthwhile, and happiness; all are positively worded, with a scale running from 0 to 10 where 0 indicates negative feelings. Conversely, the final question on anxiety is a negatively worded item, where 0 indicates a more positive outcome.

The previous edition of this report describes our initial investigations into the SWB variables in more detail. This edition summarises those findings before offering new insights from our quality analysis.

An assessment of year one Graduate Outcomes data concluded that the reverse coding of the anxiety question, which follows on from the three positively worded questions, may have led some respondents to misinterpret the scale. Both assumptions about the intent of a survey and a reverse-keyed item following questions scaled in the opposite direction can increase mis-response to reversed items,[2] and previous research has found that the final ONS well-being question has the potential to confuse respondents.[3] Therefore, to reduce the risk of both confusion and straight-lining, some changes were implemented in year two, including the removal of the additional wording clarifying the scale for the first three questions. It was hoped this alteration would make the additional wording stand out for the anxiety question by highlighting the change in direction of coding. It also brought the questions more in line with other surveys utilising the ONS harmonised questions. The impact of these changes will be assessed in this report.

Further to this, quality work last year featured some analysis of subjective well-being by completion mode. The following report includes an assessment of the effect of survey mode on subjective well-being data, which was also identified as the next step for research in the 2020 Graduate Outcomes survey quality report. This is an important line of research that can affect all areas of the subjective well-being questions. The ONS subjective well-being questions have not traditionally been administered online, so it has been especially important to assess whether they remain appropriate in terms of layout and wording. Online surveys have been found to increase the speed at which participants answer questions, and grid questions can encourage lower levels of concentration and the straight-lining of responses.[4] Equally, social desirability bias can come into play for sensitive questions and for questions administered by an interviewer.[5] These effects will be considered in the following analysis.

Straight-lining in SWB questions

There are numerous reasons a graduate may straight-line, including misinterpretation, satisficing or survey fatigue. It can occur in different forms, but this analysis is based upon the non-differentiation of responses, that is, identical scores provided across the subjective wellbeing questions. Straight-lining can be quite evident in some areas of the Graduate Outcomes survey due to the reverse wording of the final subjective wellbeing question; however, responses may still be valid where it has occurred. An issue can be harder to spot if a large proportion of the population has scored towards the midpoint of the scale, as this can lead to higher levels of valid straight-lining, but in general straight-lining is considered one of the most important indicators of survey data quality.[6]
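As a minimal sketch of this definition, non-differentiation can be flagged by checking whether all four SWB answers in a record are identical. The variable names below are illustrative rather than the actual Graduate Outcomes field names:

# Flag straight-lined records: all four SWB answers take the same value.
# Illustrative data; column names are hypothetical.
swb <- data.frame(
  satisfaction = c(7, 5, 9),
  worthwhile   = c(8, 5, 9),
  happiness    = c(7, 5, 9),
  anxiety      = c(3, 5, 9)
)

swb_cols <- c("satisfaction", "worthwhile", "happiness", "anxiety")
swb$straight_lined <- apply(swb[swb_cols], 1,
                            function(x) length(unique(x)) == 1)
swb$straight_lined  # FALSE TRUE TRUE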

The following table highlights the levels of straight-lining in the Graduate Outcomes survey by mode and year.

Table 13 Levels of straight-lining in Graduate Outcomes in Cohort D of year one and year two

Completion mode   Response pattern     Year one %   Year two %   Difference
CATI              Not straight-lined   99.21%       99.24%       0.03%
CATI              Straight-lined       0.79%        0.76%        -0.03%
Online            Not straight-lined   97.16%       96.84%       -0.31%
Online            Straight-lined       2.84%        3.16%        0.31%

The table above highlights that straight-lining has decreased for telephone interviews but appears to have increased slightly for online completions in year two. However, the numbers remain particularly low given the overall size of the population.

To delve further into the results illustrated above, the figure below indicates the numbers of graduates straight-lining for both years, split by mode. Due to the overall increase in response rate in year two, the numbers are higher, but this offers an insight into the distribution of straight-lining responses.

Chart: the number of respondents who straight-lined at each score within the subjective wellbeing questions, split by mode and year.

Figure 2 An illustration of straight-lining responses across cohort D of year one and two of the survey, split by mode


As can be seen, there are higher levels of straight-lining for certain scores in the Figure above, particularly ‘5’, ‘7’, ‘8’ and ‘10’. However, though useful, this does not show whether higher numbers of graduates are selecting these scores overall, regardless of straight-lining. To aid in understanding straight-lining a little better, the following Figure illustrates straight-lining levels when corrected by the common scores provided to the subjective well-being questions. The percentage of responses received for each score across Cohort D is used to calculate the expected distribution of straight-lining at each particular score, and the difference between the expected and actual values is plotted below. This helps to determine whether straight-lining is higher than expected based on the normal distribution of scores provided.

Chart: the difference between expected and actual levels of straight-lining at each score, split by mode and year.

Figure 3 Straight-lining in cohort D of year one and two, corrected by the frequency of scores selected for the subjective well-being questions 


As can be seen, straight-lining is still particularly high at points ‘5’ and ‘10’, which are scores that would be more commonly expected for non-differentiation of responses. Indeed, when considering a potentially valid straight-lining response, ‘5’ is the value we would most expect to be selected, and a score of ‘10’ may be more likely to be selected due to satisficing and an assumption about the direction of the scale. However, the Figure above also illustrates that the peaks seen in the preceding Figure at scores of ‘7’ and ‘8’ are not as concerning as they may have initially seemed, as these are much more commonly selected responses, and there is therefore naturally a higher chance for straight-lining to occur. This also indicates that straight-lining has decreased towards the higher scores for the online mode.
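The report does not specify the exact form of this correction beyond the description above, but a minimal sketch, assuming straight-lines are expected in proportion to the overall score frequencies, might look as follows (simulated data throughout):

# Expected vs actual straight-lining per score, assuming straight-lines
# follow the overall distribution of scores. Data below is simulated.
set.seed(1)
scores    <- sample(0:10, 4000, replace = TRUE)  # all individual SWB responses
sl_scores <- sample(0:10, 120, replace = TRUE)   # score of each straight-lined record

score_freq <- prop.table(table(factor(scores, levels = 0:10)))
expected   <- as.numeric(score_freq) * length(sl_scores)
actual     <- as.numeric(table(factor(sl_scores, levels = 0:10)))
difference <- actual - expected  # positive = more straight-lining than expected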

As mentioned, straight-lining is often viewed as one of the most important indicators of survey data quality,[7] and the removal of straight-lined responses can also be a useful tool in determining whether there is systematic confusion present in the survey, or whether the effects are a result of satisficing or similar behaviours. This will be revisited later in the Misinterpretation of the subjective well-being questions section.

Mode effects in SWB questions

Graduate Outcomes’ mixed-mode methodology can help to increase response rates and reach a wider range of respondents, thereby improving the representativeness of the dataset. However, mode effects can have an impact on the data and assessing results by completion mode is important, especially as the ONS subjective wellbeing questions are not traditionally administered through online surveying. Social desirability bias is one example of a mode effect that can impact the response received to a question, especially if a survey is administered by an interviewer.[8] This effect is likely to be more pronounced for sensitive or personal questions,[9] and subjective well-being can certainly be considered within this group, highlighting the relevance of quality analysis on this section of the survey in particular.

Percentage distributions of the subjective wellbeing scores across each year, for each of the SWB questions, are shown in the following Figures. These are split by completion mode to illustrate differences between years and to begin to highlight possible mode effects.


Column chart shows life satisfaction scores for Year 1 by completion mode. Higher scores were given over the phone (CATI) than online.

Figure 4 Distribution of Year 1 (Cohort D) Life Satisfaction scores by completion mode


Column chart shows life satisfaction scores for Year 2 by completion mode. Higher scores were given over the phone (CATI) than online.

Figure 5 Distribution of Year 2 (Cohort D) Life Satisfaction scores by completion mode


Column chart shows life worthwhile scores for Year 1 by completion mode. Higher scores were given over the phone (CATI) than online.

Figure 6 Distribution of Year 1 (Cohort D) Life Worthwhile scores by completion mode


Column chart shows life worthwhile scores for Year 2 by completion mode. Higher scores were given over the phone (CATI) than online.

Figure 7 Distribution of Year 2 (Cohort D) Life Worthwhile scores by completion mode


Column chart shows happiness scores for Year 1 by completion mode. Higher scores were given over the phone (CATI) than online.

Figure 8 Distribution of Year 1 (Cohort D) Happiness scores by completion mode


Column chart shows happiness scores for Year 2 by completion mode. Higher scores were given over the phone (CATI) than online.

Figure 9 Distribution of Year 2 (Cohort D) Happiness scores by completion mode


Column chart shows anxiety scores for Year 1 by completion mode. Lower scores were given over the phone (CATI) than online. The proportion of CATI responses answering zero was more than double the proportion of online responses answering zero.

Figure 10 Distribution of Year 1 (Cohort D) Anxiety scores by completion mode


Column chart shows anxiety scores for Year 2 by completion mode. Lower scores were given over the phone (CATI) than online. The proportion of responses answering zero was lower than in year 1, but still much higher for CATI responses than online responses

Figure 11 Distribution of Year 2 (Cohort D) Anxiety scores by completion mode


Between years, patterns in the percentage distributions seem similar for each subjective well-being question. Equally, all positively worded questions seem to follow similar trends in distribution. For anxiety, the biggest mode effect can be seen for a score of 0, which is selected far more for telephone interviews than online. It seems likely that this is a result of social desirability bias. Indeed, this seems to have impacted each question, with higher percentages of graduates selecting positive outcomes when completing through a telephone interview and more selecting negative outcomes when completing online.

Linear regression can be utilised to analyse the relationship between the completion mode used and the response provided to the subjective wellbeing questions, and can help describe the differences shown in the figures above. Using completion mode as the explanatory variable and the response to each subjective wellbeing question as the dependent variable illustrates the interaction between them. Completion mode is a categorical variable, therefore a dummy variable was created in R, with telephone interviews as the reference category (0) and online completion coded as 1. This is an arbitrary selection but is considered in the interpretation of the coefficients. The intercept therefore gives the average value for telephone interviews, and the difference that would be expected if a graduate had completed online can be seen in the following two tables as the +/- Online value.
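A minimal sketch of this model fit is shown below, assuming simulated data and illustrative variable names; coef(fit) returns the telephone intercept and the +/- Online coefficient of the kind reported in the tables that follow.

# Mode-effect regression sketch with a dummy-coded completion mode.
# Simulated anxiety scores with a small positive shift for online completions.
set.seed(2)
df <- data.frame(mode = sample(c("CATI", "Online"), 500, replace = TRUE))
df$anxiety <- pmin(10, pmax(0, round(rnorm(500, mean = 3.5, sd = 2.5) +
                                       (df$mode == "Online"))))
# Dummy coding: online = 1, telephone (CATI) = 0, so the intercept is the CATI mean
df$online <- ifelse(df$mode == "Online", 1, 0)

fit <- lm(anxiety ~ online, data = df)
coef(fit)  # (Intercept) ~ average CATI score; 'online' ~ expected +/- for online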

Table 14 Cohort D, year one telephone interview intercept values and expected difference for online completion mode

  Anxiety Happiness Satisfaction Worthwhile
Intercept (Telephone) 3.41 7.52 7.60 7.84
Online +1.09 -0.63 -0.57 -0.57

Table 15 Cohort D, year two telephone intercept values and expected difference for online completion mode

  Anxiety Happiness Satisfaction Worthwhile
Intercept (Telephone) 3.62 7.41 7.42 7.76
Online +0.82 -0.74 -0.67 -0.73

The tables above highlight the impact of completion mode on each of the subjective wellbeing responses. The ‘Online’ amount shows the difference that would have been expected if the graduates had completed online, compared to the average (intercept) value for telephone interviews, illustrating possible mode effects at play. These results suggest that graduates who completed the survey online are more likely to score themselves as having higher anxiety levels and lower happiness, satisfaction and worthwhile levels. This is backed by existing research, such as a study by Dolan and Kavetsos (2016), which finds that telephone interviews are generally associated with significantly higher reported levels of wellbeing. Of the positively worded questions, happiness seems to be most affected for Graduate Outcomes, with an expected drop of 0.69 (averaged across the two years) for completion in the online mode.

Misinterpretation of the subjective well-being questions

Highlighted below are the average positively worded ratings in relation to high anxiety scores. Generally, the recoded score for a negatively worded question is expected to be similar to the average of a participant’s answers to the positively worded questions; if it is not, this may be an indication of confusion.[10] The following table highlights the average of the positively worded ratings provided when graduates selected high anxiety scores.

Table 16 Average positively worded question rating provided in cohort D of the survey for year one and year two by graduates who selected high anxiety scores, with the difference between years

Average positively worded rating
Anxiety rating Year 1 (Cohort D) Year 2 (Cohort D) Difference
8-10 6.44 6.09 -0.35
9-10 6.22 5.73 -0.49
10 5.91 5.32 -0.59

As can be seen in the table above, the average of the positively worded ratings when anxiety is high has dropped in year two. This is a positive indication of a possible reduction in confusion, suggesting that participants are understanding the question better. Equally, the reduction for each of the high anxiety groups is greater than the drop in the overall average positivity rating between cohort D in year one and year two when all questions were answered, which fell from 7.43 to 7.25.
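A minimal sketch of this check is given below, assuming a recoding convention of 10 minus the anxiety score and illustrative variable names:

# Recode anxiety onto the positive direction and compare with the mean of
# the three positively worded answers; a persistent gap for high-anxiety
# respondents may indicate scale confusion. Illustrative data.
swb <- data.frame(
  satisfaction = c(7, 2, 9), worthwhile = c(8, 3, 9),
  happiness    = c(7, 2, 9), anxiety    = c(3, 9, 9)
)
swb$positive_mean  <- rowMeans(swb[c("satisfaction", "worthwhile", "happiness")])
swb$anxiety_recode <- 10 - swb$anxiety
mean(swb$positive_mean[swb$anxiety >= 8])  # average positivity at high anxiety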

As was mentioned in the Straight-lining in SWB questions section, the removal of straight-lining participants can help to determine whether there is genuine misinterpretation of the questions, or whether remaining confusion is a result of satisficing or similar effects. When straight-liners were removed from the dataset, the average positivity ratings were very similar to those above, both for the overall average and for the high anxiety averages. Therefore, although removing straight-lined responses does bring the values slightly closer to the expected average, potential misinterpretation levels have reduced and the changes between years remain very similar.
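Building on the straight-lining flag and the illustrative swb frame from the earlier sketches, this removal check might look like the following:

# Recompute the high-anxiety positivity average after dropping straight-lined
# records, to check whether apparent confusion is driven by straight-liners.
swb_cols <- c("satisfaction", "worthwhile", "happiness", "anxiety")
swb$straight_lined <- apply(swb[swb_cols], 1,
                            function(x) length(unique(x)) == 1)
non_sl <- swb[!swb$straight_lined, ]
mean(non_sl$positive_mean[non_sl$anxiety >= 8])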

Indeed, comparisons of the positively worded questions and anxiety can start to indicate confusion. However, a more direct comparison is between happiness and anxiety, which can be considered contrary statements: both are unlikely to be true (e.g. extremely anxious and extremely happy) but both can be false (e.g. not happy and not anxious).[11] The following three tables highlight the overall distribution of happiness and anxiety scores provided by graduates, first overall and then split by completion mode. They only include graduates who responded to both the happiness and anxiety questions. The tables have been reordered so that anxiety scores begin with ‘high anxiety’, which covers scores of 6-10.

Table 17 Overall percentage happiness and anxiety scores and differences between year one and year two of the survey

Year 1   Happiness score →
Anxiety score ↓   Low   Medium   High   Very high
High 5.81% 8.12% 11.30% 4.88%
Medium 1.68% 5.30% 9.85% 3.54%
Low 1.28% 3.24% 12.76% 6.80%
Very low 0.95% 1.95% 8.69% 13.85%
Year 2   Happiness score →
Anxiety score ↓   Low   Medium   High   Very high
High 6.89% 9.09% 11.62% 4.02%
Medium 1.74% 5.94% 10.34% 2.93%
Low 1.24% 3.42% 13.10% 6.09%
Very low 0.82% 1.85% 8.52% 12.38%
Difference   Happiness score →
Anxiety score ↓   Low   Medium   High   Very high
High 1.08% 0.97% 0.33% -0.86%
Medium 0.06% 0.64% 0.49% -0.60%
Low -0.04% 0.18% 0.33% -0.17%
Very low -0.12% -0.11% -0.17% -1.47%

Table 18 Percentage happiness and anxiety scores for telephone interviews and differences between year one and year two of the survey

Year 1   Happiness score →
Anxiety score ↓   Low   Medium   High   Very high
High 3.74% 6.59% 10.20% 4.30%
Medium 1.39% 4.90% 10.26% 3.43%
Low 0.99% 3.17% 13.70% 7.05%
Very low 0.78% 2.13% 10.49% 16.88%
Year 2   Happiness score →
Anxiety score ↓   Low   Medium   High   Very high
High 4.17% 7.65% 11.23% 4.03%
Medium 1.31% 5.40% 11.23% 3.28%
Low 0.94% 3.25% 13.68% 6.70%
Very low 0.74% 2.00% 9.66% 14.75%
Difference   Happiness score →
Anxiety score ↓   Low   Medium   High   Very high
High 0.40% 1.06% 1.02% -0.27%
Medium -0.07% 0.50% 0.97% -0.14%
Low -0.05% 0.08% -0.01% -0.35%
Very low -0.04% -0.14% -0.83% -2.13%

Table 19 Percentage happiness and anxiety scores for online completion and differences between year one and year two of the survey

Year 1   Happiness score →
Anxiety score ↓   Low   Medium   High   Very high
High 9.06% 10.52% 13.02% 5.80%
Medium 2.14% 5.92% 9.22% 3.71%
Low 1.74% 3.34% 11.30% 6.41%
Very low 1.22% 1.67% 5.86% 9.07%
Year 2   Happiness score →
Anxiety score ↓   Low   Medium   High   Very high
High 11.15% 11.31% 12.23% 4.02%
Medium 2.41% 6.76% 8.97% 2.39%
Low 1.70% 3.68% 12.20% 5.15%
Very low 0.95% 1.61% 6.77% 8.70%
Difference   Happiness score →
Anxiety score ↓   Low   Medium   High   Very high
High 2.09% 0.79% -0.78% -1.78%
Medium 0.26% 0.84% -0.25% -1.32%
Low -0.04% 0.34% 0.90% -1.26%
Very low -0.27% -0.06% 0.91% -0.37%

The majority of graduates would be expected to fall within the diagonal band outlined in the tables above, running from high anxiety with low happiness to very low anxiety with very high happiness. In general, this is what is observed, although there are some areas that do not fit this trend. The tables highlight that low happiness, high anxiety responses are much more common in the online mode. There are also fewer graduates in the very high happiness groups online. As mentioned in the Subjective wellbeing data section, the changes that were made to the questions in year two aimed to reduce the levels of potential misinterpretation. There was a larger reduction in the combination of very high happiness with high anxiety than in any other combination for the online mode, which is a positive outcome considering that the layout and question alterations were implemented predominantly to improve responses in the online mode. There has also been a reduction in these groups for telephone interviews, which saw similar changes to the CATI script.
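A sketch of the banded cross-tabulation underlying these tables follows. The thresholds are an assumption based on the standard ONS personal well-being bands (happiness: 0-4 low, 5-6 medium, 7-8 high, 9-10 very high; anxiety: 0-1 very low, 2-3 low, 4-5 medium, 6-10 high, which matches the 6-10 ‘high anxiety’ definition above):

# Band happiness and anxiety scores, then cross-tabulate the percentages,
# as in Tables 17-19. Thresholds follow the assumed ONS bands; data simulated.
set.seed(3)
happiness <- sample(0:10, 1000, replace = TRUE)
anxiety   <- sample(0:10, 1000, replace = TRUE)

happiness_band <- cut(happiness, breaks = c(-1, 4, 6, 8, 10),
                      labels = c("Low", "Medium", "High", "Very high"))
anxiety_band   <- cut(anxiety, breaks = c(-1, 1, 3, 5, 10),
                      labels = c("Very low", "Low", "Medium", "High"))

round(100 * prop.table(table(anxiety_band, happiness_band)), 2)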

Happiness and anxiety are not fully antonymous, but research has found that high anxiety levels have a negative correlation with happiness levels.[12] Although participants could genuinely score towards one end of the scale for both, this is less likely towards the high end of the scale, as happiness and anxiety can be considered contrary statements, as mentioned previously. As a result, more reliably determining the presence of misinterpretation requires analysis to focus on the top end of the scale. The following tables focus on high anxiety and high happiness ratings together, narrowing from scores of ‘7-10’ for both questions down to scores of ‘10’ for both. They label any graduates outside the specified ratings as ‘understood’ and any within as ‘misunderstood’.

Table 20 Happiness and anxiety scores both within 7-10, split by mode for cohort D of year one and two

Happiness/anxiety scores both 7-10
Classification   Completion mode   Y1 %   Y2 %   Difference
Misunderstood CATI 9.44% 9.70% 0.26%
Misunderstood Online 13.09% 10.38% -2.71%
Understood CATI 90.56% 90.30% -0.26%
Understood Online 86.91% 89.62% 2.71%

Table 21 Happiness and anxiety scores both within 8-10, split by mode for cohort D of year one and two

Happiness/anxiety scores both 8-10
Classification   Completion mode   Y1 %   Y2 %   Difference
Misunderstood CATI 3.24% 3.14% -0.10%
Misunderstood Online 5.28% 3.56% -1.72%
Understood CATI 96.76% 96.86% 0.10%
Understood Online 94.72% 96.44% 1.72%


Table 22 Happiness and anxiety scores both within 9-10, split by mode for cohort D of year one and two

Happiness/anxiety scores both 9-10
Classification   Completion mode   Y1 %   Y2 %   Difference
Misunderstood CATI 0.83% 0.76% -0.07%
Misunderstood Online 1.80% 1.12% -0.68%
Understood CATI 99.17% 99.24% 0.07%
Understood Online 98.20% 98.88% 0.68%


Table 23 Happiness and anxiety scores both 10, split by mode for cohort D of year one and two

Happiness/anxiety scores both 10
Classification   Completion mode   Y1 %   Y2 %   Difference
Misunderstood CATI 0.30% 0.28% -0.02%
Misunderstood Online 0.89% 0.63% -0.26%
Understood CATI 99.70% 99.72% 0.02%
Understood Online 99.11% 99.37% 0.26%

All levels of scoring have seen a reduction in the level of possible misinterpretation, apart from the ‘7-10’ telephone (CATI) group. The online group has seen notable reductions across all levels. The misunderstood percentages naturally fall as the score band narrows, so the more interesting groups are towards the higher scores. Across all levels, online has higher levels of misunderstanding in both years, but it has also seen the bigger improvement of the two modes.
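A minimal sketch of the classification behind Tables 20-23, with the score band as a parameter, might look like this (simulated scores; a threshold of 7 corresponds to the ‘7-10’ grouping):

# Flag potential misinterpretation: happiness and anxiety both at or above
# the threshold, i.e. both within the same top score band.
flag_misunderstood <- function(happiness, anxiety, threshold = 7) {
  happiness >= threshold & anxiety >= threshold
}

set.seed(4)
happiness <- sample(0:10, 1000, replace = TRUE)
anxiety   <- sample(0:10, 1000, replace = TRUE)
mean(flag_misunderstood(happiness, anxiety, 7))   # share flagged for scores 7-10
mean(flag_misunderstood(happiness, anxiety, 10))  # share flagged for scores of 10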

Conclusions on SWB data quality

Overall, it seems that the quality of subjective well-being data has improved or remained at a high standard. Straight-lining was particularly low given the overall size of the population regardless of the changes, and decreased overall for CATI, although the overall percentage increased slightly for online. However, the levels of straight-lining did decrease for the online mode towards the higher scores, which begins to indicate that the aim of decreasing misinterpretation, particularly towards the top end of the scale, was met and is a positive step. Additionally, the survey improvements made in year two were not expected to particularly reduce straight-lining in areas where it is valid, such as towards the middle or potentially the bottom end of the scale, as low anxiety levels are less likely to correlate with high positivity levels than the other way around.[13]

Mode effects seem to have reduced between years for anxiety, which could perhaps indicate a reduction in confusion in the online mode, but could also be caused by other factors. However, there is a bigger disparity for the positively worded questions, which could be influenced by social desirability bias and pandemic effects. Anxiety levels have risen for telephone interviews, possibly because the Covid-19 pandemic has made it more socially acceptable to report higher anxiety to an interviewer; in the past, people may have felt more comfortable providing a higher anxiety rating online than through a telephone interview. Research suggests that underreporting of sensitive issues is likely to be lower both when a topic becomes more socially acceptable and when there is less stigma associated with it.[14]

Results of the misinterpretation analysis are very positive, as the pandemic might have been expected to have the opposite effect by increasing anxiety levels. Overall, the level of apparent misinterpretation has decreased. Across both years, online has higher levels of apparent misinterpretation than telephone interviews, but the online mode has also seen the biggest improvement in understanding. Potential misinterpretation decreased across all levels of scoring apart from scores of 7-10 in the CATI group. The survey changes were more likely to impact the online mode, which saw notable reductions across all groups. The tables highlight that low happiness, high anxiety responses are much more common in the online mode, which is likely to be a result of social desirability bias, as people may not be as comfortable sharing this sensitive information with a telephone interviewer. There are also fewer graduates in the very high happiness groups, perhaps as a result of the pandemic or other related factors.



[1] (Dolan et al., 2008)

[2] (Weijters et al., 2013)

[3] (Ralph, Palmer and Olney, 2011)

[4] (DeLeeuw, 2018)

[5] (Duffy et al., 2005)

[6] (Reuning and Plutzer, 2020)

[7] (Reuning and Plutzer, 2020)

[8] (Kocar and Biddle, 2020)

[9] (Duffy et al., 2005)

[10] (Józsa and Morgan, 2017)

[11] (Horn, 2018)

[12] (Arab et al., 2016)

[13] (Arab et al., 2016)

[14] (McNeeley, 2012)