Graduate voice data
Introduction and context
The graduate voice question set comes after graduates have responded to questions about their current or previous activity and is followed by the subjective well-being and opt-in banks in the survey. The questions were designed to capture more about the graduate’s views of their current activity and to act as a more ‘eudaimonic’ measure of their well-being, more detail on which can be found in the Graduate Outcomes survey methodology. These questions offer an important additional measure of graduate success, as well as a valuable further measure of graduate well-being, as some research has found that eudaimonic measures are more strongly associated with self-reported measures of well-being than hedonic measures (McMahan and Estes, 2011). Depending on the responses provided by the graduate earlier in the survey, the questions are framed around their work, study, or activities (Figure 2). Responses are not mandatory and are selected from a five-point Likert-type scale ranging from ‘strongly disagree’ to ‘strongly agree’.
The three graduate voice questions, which are worded differently depending on the activities selected by the graduate (a hypothetical sketch of this routing follows the list), are:
- My current work fits with my future plans
- My current work is meaningful
- I am utilising what I learnt during my studies in my current work
- My current study fits with my future plans
- My current study is meaningful
- I am utilising what I learnt during my studies in my current study
- My current activity/activities fits with my future plans
- My current activity/activities is meaningful
- I am utilising what I learnt during my studies in my current activity/activities
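The survey’s actual routing logic is not reproduced in this section; the following is a minimal sketch of how such question-variant selection could work. The activity codes and the work-over-study precedence are illustrative assumptions, not the survey’s implementation.

```python
# Hypothetical sketch of Graduate Voice routing. The activity codes and the
# work > study > other precedence are assumptions for illustration only.

WORK_ACTIVITIES = {"paid_work", "self_employment", "voluntary_work"}
STUDY_ACTIVITIES = {"further_study"}


def graduate_voice_items(activities: set) -> list:
    """Return the three Graduate Voice items worded for the graduate's activities."""
    if activities & WORK_ACTIVITIES:
        subject = "work"
    elif activities & STUDY_ACTIVITIES:
        subject = "study"
    else:
        subject = "activity/activities"
    return [
        f"My current {subject} fits with my future plans",
        f"My current {subject} is meaningful",
        f"I am utilising what I learnt during my studies in my current {subject}",
    ]


print(graduate_voice_items({"further_study"}))  # study-worded variant
```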
Two changes were made to the graduate voice questions in year three of the survey, both designed to improve the experience of respondents in the online mode. First, in cohort A, the layout of the questions was optimised for those completing on a mobile phone, making the questions easier to see on the screen and reducing the amount of scrolling required to select answers. Then, in cohort C, the questions were changed from a grid layout to a carousel, illustrated in Figure 3. The carousel shows one question at a time, auto-advancing to the next question once an answer has been selected; carousels have been found to reduce satisficing responses and to reduce response differences between modes (DeLeeuw, 2018). It is worth noting that the carousel can appear slightly differently depending on the completion mode and device used by the graduate; for example, Figure 4 highlights how the carousel may appear on some mobile devices.
Figure 2: Graduate Voice questions in the grid layout used for Cohorts A and B in year 3
Figure 4: Graduate Voice questions in the auto-advance carousel layout used for Cohorts C and D in year 3 in compact format displayed on a mobile phone.
Due to the nature of the changes made to these questions, much of the analysis will focus on the online platform, although comparisons will be made to respondents who answered via CATI in some cases.
Methods and results
Item non-response on mobile
Following the optimisation of the graduate voice questions on mobile in cohort A of year three, and the introduction of the carousel in cohort C, an assessment of non-response to the three questions is useful for determining the impact of the changes. Table 21 considers the number of questions in the block that were answered by graduates who completed the mandatory survey questions; some of these graduates may have dropped out before reaching the graduate voice questions. It includes details of non-response to the section as a whole and to individual questions within the graduate voice block.
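As a rough illustration of how the response patterns in Table 21 might be derived, the sketch below classifies each respondent by which of the three items they answered. The DataFrame and column names (gv1 to gv3) are hypothetical, not the survey’s actual data structure.

```python
import pandas as pd

# Hypothetical respondent-level data: gv1-gv3 hold Likert answers (1-5) and
# NaN marks an unanswered item. Assumes prior filtering to graduates who
# completed all mandatory questions on the mobile completion mode.
def classify_pattern(row: pd.Series) -> str:
    answered = [pd.notna(row[q]) for q in ("gv1", "gv2", "gv3")]
    if all(answered):
        return "All"
    if answered == [True, False, False]:
        return "1st only"
    if answered == [True, True, False]:
        return "1st & 2nd only"
    if not any(answered):
        return "None"
    return "Other pattern"

df = pd.DataFrame({
    "gv1": [4, 5, None, 3],
    "gv2": [4, 5, None, None],
    "gv3": [4, None, None, None],
})
# Percentage of respondents per pattern, matching the table's row categories
print((df.apply(classify_pattern, axis=1).value_counts(normalize=True) * 100).round(2))
```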
Table 21: Response patterns to the Graduate Voice questions, with year-on-year differences, for graduates who completed all mandatory survey questions on the mobile completion mode, by cohort and year.
| Questions answered | Cohort A Year 2 | Cohort A Year 3 | Difference | Cohort B Year 2 | Cohort B Year 3 | Difference | Cohort C Year 2 | Cohort C Year 3 | Difference | Cohort D Year 2 | Cohort D Year 3 | Difference |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| All | 91.38% | 93.97% | 2.59% | 89.72% | 93.13% | 3.41% | 92.36% | 92.33% | -0.03% | 93.73% | 92.61% | -1.12% |
| 1st only | 0.32% | 0.09% | -0.23% | 0.49% | 0.15% | -0.34% | 0.30% | 1.19% | 0.89% | 0.08% | 1.15% | 1.07% |
| 1st & 2nd only | 2.06% | 0.05% | -2.00% | 2.65% | 0.13% | -2.52% | 0.15% | 0.60% | 0.45% | 0.03% | 0.43% | 0.40% |
| None | 5.88% | 5.71% | -0.17% | 6.52% | 6.10% | -0.42% | 6.96% | 5.73% | -1.23% | 5.94% | 5.76% | -0.18% |
As Table 21 indicates, there are differences in item non-response within the graduate voice block not only between years but also between cohorts. Cohorts A and B saw a rise in mobile-completion graduates responding to all of the questions between years two and three, following the optimisation of the questions on mobile, with a particular reduction in graduates responding to only the first and second questions. Cohorts C and D did not see a rise, although both already had a higher proportion of graduates responding to all questions in year two. Equally, the percentage of graduates responding to none of the questions reduced across all cohorts, with the largest reduction in cohort C after the implementation of the carousel. Whilst this appears reassuring, both cohorts C and D saw a rise in graduates responding to only the first question, or only the first and second questions, perhaps indicating that the carousel causes some graduates to abandon the block part-way, as the remaining questions are no longer visible on the page. As the assessment considers graduates who completed the mandatory questions, the ‘none answered’ group may include graduates who had already dropped out at earlier questions, but it is useful to initially consider all graduates who may have seen the questions. To understand the behaviour of these graduates further, Figure 5 highlights the responses to the question blocks before and after the Graduate Voice questions for graduates who did not answer any of the three questions. Note that all three blocks, Graduate Voice (G), the block preceding it (F) and the block that follows (H), are optional, in the sense that respondents can skip them without answering a single question.
Figure 5: Responses provided to the first question in block F and/or at least one question in block H for graduates who provided no answers to the Graduate Voice block (G) on the mobile completion mode.
Whilst around half of the graduates who did not respond to any of the Graduate Voice questions in section G did respond to the first question in block F, the proportions of graduates who responded only to block H, or to both the block before and the block after the Graduate Voice questions, are very low. The proportion who responded to both surrounding blocks also appears to reduce further after the introduction of the carousel. Graduates who answered questions in the blocks before and after section G may be more inclined to respond than those who had already shown instances of item non-response at other questions, and they may have felt more encouraged after the introduction of the new layout. The increase in graduates who answered neither block further suggests that those who reached the section, or had previously answered optional questions, were more encouraged to respond after the change, and that item non-response is more likely to come from graduates who are reluctant to answer multiple questions or who had already dropped out. Whilst some of these graduates may not have seen the questions, it is useful to understand these patterns for all graduates who had the potential to respond. To see whether patterns change when graduates are more likely to have seen the questions, Table 22 highlights the responses received when the first question in block F, which precedes the Graduate Voice questions, was answered.
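A sketch of how the Figure 5 breakdown might be produced is below: it keeps only graduates with no block G answers and cross-classifies them by whether the first block F question and at least one block H question were answered. All column names (f1, gv1-gv3, h1, h2) are hypothetical.

```python
import numpy as np
import pandas as pd

def f_h_pattern_shares(df: pd.DataFrame) -> pd.Series:
    """Shares of F/H answer patterns among Graduate Voice (G) non-responders."""
    no_g = df[["gv1", "gv2", "gv3"]].isna().all(axis=1)
    answered_f = df["f1"].notna()
    answered_h = df[["h1", "h2"]].notna().any(axis=1)
    # np.select takes the first matching condition, so "F and H" is checked first
    labels = np.select(
        [answered_f & answered_h, answered_f, answered_h],
        ["F and H answered", "F only", "H only"],
        default="Neither",
    )
    return pd.Series(labels, index=df.index)[no_g].value_counts(normalize=True) * 100
```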
Table 22: Response patterns to the Graduate Voice question block, with year-on-year differences, for graduates who answered the first question of the block before Graduate Voice on the mobile completion mode, by cohort and year.
| Questions answered | Cohort A Year 2 | Cohort A Year 3 | Difference | Cohort B Year 2 | Cohort B Year 3 | Difference | Cohort C Year 2 | Cohort C Year 3 | Difference | Cohort D Year 2 | Cohort D Year 3 | Difference |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| All | 93.72% | 96.30% | 2.58% | 92.45% | 95.20% | 2.75% | 95.55% | 94.81% | -0.74% | 96.30% | 95.32% | -0.98% |
| 1st only | 0.32% | 0.10% | -0.22% | 0.50% | 0.15% | -0.35% | 0.31% | 1.21% | 0.89% | 0.08% | 1.17% | 1.09% |
| 1st & 2nd only | 2.11% | 0.05% | -2.06% | 2.74% | 0.13% | -2.61% | 0.16% | 0.62% | 0.46% | 0.03% | 0.44% | 0.41% |
| None | 3.48% | 3.37% | -0.11% | 3.71% | 4.03% | 0.32% | 3.75% | 3.22% | -0.53% | 3.37% | 3.01% | -0.36% |
A few differences are noticeable when comparing Table 21, which highlights response levels to Graduate Voice for graduates who answered all mandatory questions, with Table 22, which only considers graduates who answered the first question of the block preceding the Graduate Voice questions. The main difference is that a larger percentage of graduates answered all questions in year three when the block before Graduate Voice was answered: for example, 95.32% of graduates in cohort D answered all of the Graduate Voice questions when they answered the first question in block F, compared to 92.61% when considering all cohort D graduates who completed the mandatory questions. There is also a proportional drop in the percentage of graduates who answered none of the questions. Crucially, however, Table 22 shows similar findings to Table 21: whilst there are differences between the cohorts which may influence findings, the optimisation on mobile appears to have increased the percentage of graduates answering all questions, whereas the introduction of the carousel may have reduced it slightly. Conversely, full item non-response to all the Graduate Voice questions reduced most for cohorts C and D after the carousel was introduced, perhaps indicating that the carousel encouraged more interaction with the block.
For all these results, the differences are not large and could easily be influenced by other factors. It is also important to determine whether the introduction of the carousel has had any impact on data quality before considering what actions to take.
Straight-lining
Straight-lining is a valuable indicator of poor data quality, especially for grid questions in a survey, and can indicate survey behaviours such as satisficing (Schonlau and Toepoel, 2015). As a result, it could be useful in assessing the impact of the introduction of the carousel to replace the previous grid layout of the Graduate Voice questions. Carousels can not only reduce straight-lining levels when compared to questions asked in a grid format but can also lessen response differences between survey modes (DeLeeuw, 2018). Whilst straight-lining can indicate poor quality, it is important to note that it can also be a valid response in some cases (Reuning and Plutzer, 2020). This means it is important to consider other aspects of survey data quality and to utilise evidence of other behaviours, such as indications of reluctance to respond, to aid in identifying satisficing versus valid responses (Cole, McCormick and Gonyea, 2012).
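For the analysis below, straight-lining is taken to mean identical answers across all three Graduate Voice items (non-differentiation) for respondents who answered all three. A minimal sketch of this check, assuming hypothetical column names, is:

```python
import pandas as pd

def straightline_status(row: pd.Series) -> str:
    """Classify a respondent: the straight-lined value (1-5), or not/incomplete."""
    answers = row[["gv1", "gv2", "gv3"]].dropna()
    if len(answers) < 3:
        return "Incomplete"                # excluded from Tables 23 and 24
    if answers.nunique() == 1:
        return str(int(answers.iloc[0]))   # straight-lined on this response
    return "Not straight-lined"
```

Returning the repeated value, rather than a simple flag, supports both the overall rates in Table 23 and the breakdown by response in Table 24.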
Levels of straight-lining by work type
Comparing overall levels of straight-lining before and after the changes to the presentation of the Graduate Voice questions will aid in determining their potential impact on responses. As the questions can be asked in a few different ways, the figures in Table 23 show the levels of straight-lining split by work type for online graduates in years two and three by cohort, for respondents who answered all three questions. This allows comparison of the impact of the optimisation on mobile in cohort A of year three and of the change from a grid layout to a carousel in cohort C of year three.
Table 23: Levels of straight-lining in the graduate voice question block in the online mode when all three questions have been answered, split by the question variant asked (work, activity, or study) dependent on routing
Work

| | Year 2 | Year 3 | Difference |
|---|---|---|---|
| Cohort A | 38.27% | 39.92% | 1.65% |
| Cohort B | 42.56% | 46.22% | 3.66% |
| Cohort C | 49.54% | 48.10% | -1.44% |
| Cohort D | 38.95% | 35.83% | -3.11% |

Activity

| | Year 2 | Year 3 | Difference |
|---|---|---|---|
| Cohort A | 41.58% | 42.44% | 0.87% |
| Cohort B | 44.29% | 45.70% | 1.41% |
| Cohort C | 45.33% | 44.43% | -0.91% |
| Cohort D | 42.46% | 38.20% | -4.27% |

Study

| | Year 2 | Year 3 | Difference |
|---|---|---|---|
| Cohort A | 57.17% | 59.59% | 2.42% |
| Cohort B | 59.15% | 56.09% | -3.07% |
| Cohort C | 63.56% | 54.70% | -8.85% |
| Cohort D | 54.47% | 49.62% | -4.85% |
Whilst straight-lining appears to have risen slightly for year three online respondents in cohorts A and B, apart from the study work type in cohort B, there is a drop in levels of straight-lining in every group for cohorts C and D following the introduction of the carousel. Straight-lining levels therefore appear to have dropped as a result of this change, pointing towards a potential improvement in data quality. This is as hoped, as questions presented in a grid layout can encourage satisficing behaviours, which carousels can reduce (DeLeeuw, 2018).
Levels of straight-lining in different question responses
To further assess the reduction in the levels of straight-lining following the implementation of the carousel, Table 24 indicates the differences in the common selections made by straight-lining respondents online in cohorts C and D. It is based on graduates who answered all three questions and highlights whether they straight-lined and, if so, the response they straight-lined on.

Table 24: Common responses provided when graduates straight-lined, including differences in straight-lining between years two and three of the graduate voice questions, when graduates answered all three questions online
| Response straight-lined on | Cohort C Year 2 | Cohort C Year 3 | Difference | Cohort D Year 2 | Cohort D Year 3 | Difference |
|---|---|---|---|---|---|---|
| 1 (strongly disagree) | 3.99% | 2.85% | -1.14% | 4.93% | 3.11% | -1.82% |
| 2 | 0.76% | 1.09% | 0.33% | 1.27% | 1.71% | 0.44% |
| 3 | 1.79% | 1.26% | -0.53% | 1.75% | 1.33% | -0.42% |
| 4 | 15.49% | 20.80% | 5.31% | 12.72% | 16.77% | 4.05% |
| 5 (strongly agree) | 27.10% | 21.58% | -5.52% | 20.91% | 15.17% | -5.74% |
| Not straight-lined | 50.87% | 52.41% | 1.54% | 58.42% | 61.91% | 3.49% |
The proportion of respondents in the ‘not straight-lined’ group rose, as expected from the previous analysis. In both cohorts, the biggest drop in straight-lining was among those straight-lining on a score of 5, while scores of 2 and 4 saw a rise. As mentioned previously, straight-lined responses can be valid, which is important to consider when assessing them. Straight-lining is generally highest on the higher scores of 4 and 5, which indicate positive responses to the questions and are popular responses whether straight-lining is present or not. It is also important to remember that the selection of responses is influenced by other factors, not just the layout of the questions.
Straight-lining and indicators of reluctance to respond to assess valid straight-lining
As discussed previously, valid straight-lining can occur in a survey (Reuning and Plutzer, 2020). The graduate voice questions arguably have a high chance of valid straight-lining, due to the limited responses that can be provided and the fact that all three items are positively worded; indeed, these measures are likely to be highly correlated for many graduates. As a result, it is useful to investigate other quality indicators alongside straight-lining to better understand whether there is a prevalent issue. Research by Cole, McCormick and Gonyea (2012) used the contact point at which a respondent answered a survey to indicate potentially reluctant respondents, and found that items saw higher levels of straight-lining at later points in data collection, more obviously for some items than others. As mentioned, questions in a grid format have also been found to increase straight-lining. To this effect, Figure 8 highlights the relationship between the last completion date of a graduate’s survey response and the percentage of responses that were straight-lined online when all questions were answered, to indicate whether straight-lining is being used as a satisficing response strategy for the graduate voice questions online and whether data quality reduces as the cohorts progress; CATI results are included in Figure 9 for comparison.
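A sketch of the Figure 8 series, assuming a hypothetical completed_on date column alongside the gv1-gv3 items, might compute a smoothed daily straight-lining rate like this:

```python
import pandas as pd

def straightlining_rate_by_date(df: pd.DataFrame) -> pd.Series:
    """Daily % of complete responders who straight-lined, 7-day smoothed."""
    gv = df[["gv1", "gv2", "gv3"]]
    complete = gv.notna().all(axis=1)             # answered all three items
    straight = gv.nunique(axis=1).eq(1) & complete  # identical answers
    daily = (
        pd.DataFrame({"date": df["completed_on"], "straight": straight})
        .loc[complete]
        .groupby("date")["straight"]
        .mean()
        * 100
    )
    # smooth day-to-day noise before plotting against the year two series
    return daily.rolling(7, min_periods=1).mean()
```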
Interestingly, the comparisons for the online mode in Figure 8 highlight the overall drop in straight-lining across most of the cohort, illustrating the general reduction since the introduction of the carousel. Levels of straight-lining do become more similar to year two towards the end of the cohort, perhaps indicating slight reluctance showing in year three. Although the difference is not large, there does appear to be a slight increase in straight-lining as the cohort progresses in year three; this trend is perhaps clearer now that the carousel has removed the grid layout that previously made straight-lining easier, although an increase can be seen in both years. However, across year three, even at the point of most potential reluctance towards the end of the surveying period, online straight-lining levels improved compared to year two. The CATI mode also sees an increase towards the middle and end of the cohort, but the two years appear very similar. As mentioned, replacing a grid-layout question with a carousel format has been found to reduce differences in levels of straight-lining between modes (DeLeeuw, 2018), and this appears evident when comparing Figures 8 and 9, where year three online levels of straight-lining are now closer to the day-to-day levels in the CATI mode, perhaps highlighting a reduction in invalid straight-lining as a result of the change.
Conclusions
Both the optimisation of the graduate voice questions on mobile in cohort A of year three and the addition of the auto-advance carousel in cohort C of year three appear to have had an impact on the coverage and quality of responses, in different ways. Whilst the optimisation on mobile appeared to increase responses to all questions, the carousel may have reduced them slightly; however, total non-response to all questions reduced following the implementation of the carousel, and levels of response to all questions remain high in the final two cohorts. In addition, the quality of the data may have improved following the implementation of the carousel, with a reduction in levels of straight-lining in these cohorts in year three. The assessment of reluctance helps indicate the possibility of both valid straight-lining and reluctance causing satisficing behaviours. Interestingly, the increase in straight-lining over time (with time taken as a potential indicator of greater reluctance to respond) is clearer online in year three, after the introduction of the carousel. The carousel may have removed the encouragement to straight-line that came from the presentation of the questions, as well as potentially improving the comparability of responses received through different modes (DeLeeuw, 2018).
Following this assessment, the carousel appears to be improving data quality and it seems beneficial to continue presenting the questions in this format for now; continued monitoring of the carousel’s performance, particularly over cohorts A and B, will aid in fully assessing its value. Equally, further consideration will be given to the questions and to potential changes that could improve response to all the questions in the block. For year four of the survey, changes have been made to the routing to the graduate voice questions, which will now be based fully on the main activity of the graduate. This will change the questions some graduates see and the activity they are prompted to reflect on, and will hopefully lead to more accurate and thoughtful answers in cases where graduates may not previously have found the questions relevant. The impact of this change will also be assessed in future.
References
Cole, J.S., McCormick, A.C. and Gonyea, R.M., 2012. Respondent use of straight-lining as a response strategy in education survey research: Prevalence and implications. Paper presented at the American Educational Research Association Annual Meeting, April 2012.
DeLeeuw, E.D., 2018. Mixed-mode: Past, present, and future. Survey Research Methods, 12(2), pp.75-89.
McMahan, E.A. and Estes, D., 2011. Hedonic versus eudaimonic conceptions of well-being: Evidence of differential associations with self-reported well-being. Social Indicators Research, 103(1), pp.93-108.
Reuning, K. and Plutzer, E., 2020. Valid vs. invalid straightlining: The complex relationship between straightlining and data quality. Survey Research Methods, 14(5), pp.439-459.
Schonlau, M. and Toepoel, V., 2015. Straightlining in web survey panels over time. Survey Research Methods, 9(2), pp.125-137.