Benchmarks (applicable to tables T1 to T3, T7 and E1)
This page contains the technical details and assumptions made in producing the benchmarks and adjusted sector data for tables T1 to T3, T7 and E1. It also covers the location-adjusted benchmarks, and the calculations for the standard deviations. Details of the subject and entry qualifications breakdown used to obtain the benchmarks, and tables showing the numbers of students in each category and the proportion of students in each category with different characteristics, are given at the end of this document.
Most of the indicators included in these tables have benchmarks attached. The benchmarks are not targets. They are average values which will change from one year to the next if the overall value of the characteristic changes. They are provided to give information about the sort of values that might be expected for a HE provider’s indicator if no factors other than those allowed for were important. The corollary of this is that where differences do exist, this may be due to the HE provider’s performance, or it may be due to some other factor which is not included in the benchmark.
What should be included in the benchmark?
The factors to be included in the benchmarks need to have a number of characteristics. In particular they should:
- Be associated with what is being measured
- Vary significantly from one HE provider to another
- Not be in the HE providers’ control, and so not be part of their performance.
The first two characteristics were easy to identify. It was obvious from analysis already done that non-continuation rates, for example, varied between subjects, so subject as a factor had the first characteristic. It also had the second characteristic, as the proportion of students in each subject area varied between HE providers.
It was not so easy to identify factors with the third characteristic. For example, the subjects offered at a HE provider could be considered to form part of that HE provider’s performance, in that they could theoretically be changed, but in practice changing a HE provider’s subject mix substantially is very rare. After much discussion it was agreed that both subject of study and entry qualifications should be counted as outside a HE provider’s control.
The benchmarks were therefore set up to take account of the entry qualifications of a HE provider’s students, the subjects they studied, and their age. It needs to be stressed that the fact that a difference between HE providers may be accounted for by differences in their subject or entry qualification profiles does not of itself justify that difference. The purpose of the benchmarks is to allow any discussion of the reasons for the differences to be carried out on an informed basis.
The benchmarks used for the part-time non-continuation indicators (table T3e) use different groupings of subject of study and entry qualification. Age is not a factor used in the benchmarks for table T3e.
The employment indicator (E1) benchmarks take into account a wider range of factors than those for the other indicators.
For full details of the factors used in the benchmarks, please refer to the definitions document.
Factors used in the benchmarks
Number of categories used for each factor:

| Factor | T1, T2a, T2c | T2b | T7 | T3a, T3d | T3b, T3c | T5 | T3e | E1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Subject of study (2004/05 publication onwards) | 18 | 18 | 18 | 18 | 18 | 18 | 14 | 18 |
| Subject of study (prior to 2004/05 publication) | 13 | 13 | 13 | 13 | 13 | 13 | N/A | 13 |
| Entry qualifications (2017/18 publication onwards) | 26 | 26 | 22 | 26 | 26 | 26 | N/A | N/A |
| Entry qualifications (2012/13 publication onwards) | 26 | 26 | 22 | 26 | 26 | 22 | 8 | 11 |
| Entry qualifications (2011/12 publication) | 26 | 26 | 22 | 26 | 26 | 22 | 9 | 10 |
| Entry qualifications (2010/11 publication) | 26 | 26 | 22 | 28 | 28 | 22 | 9 | 10 |
| Entry qualifications (2009/10 publication) | 28 | 28 | 22 | 28 | 28 | 22 | 9 | 10 |
| Entry qualifications (2008/09 publication) | 28 | 28 | 22 | 22 | 22 | 22 | 9 | 10 |
| Entry qualifications (2007/08 publication) | 22 | 22 | 22 | 22 | 22 | 35 | N/A | 10 |
| Entry qualifications (2004/05 - 2006/07 publications) | 22 | 22 | 35 | 22 | 22 | 35 | N/A | 10 |
| Entry qualifications (2003/04 publication) | 22 | 22 | 36 | 22 | 22 | 35 | N/A | 10 |
| Entry qualifications (2002/03 publication) | 22 | 22 | 36 | 21 | 21 | 35 | N/A | 10 |
| Age on entry (Young / Mature / unknown) | N/A | 3 | N/A | 3 | N/A | 3 | N/A | 3 |
| Government Office Region of domicile (location-adjusted only) | 13 | 13 | N/A | N/A | N/A | N/A | N/A | N/A |
| Ethnicity (2007/08 publication onwards) | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 5 |
| Ethnicity (prior to 2007/08 publication) | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 4 |
| Sex (2012/13 publication onwards) - Male / Female / Other | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 3 |
| Gender (2007/08 - 2011/12 publications) - Male / Female / Indeterminate | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 3 |
| Gender (prior to 2007/08 publication) - Male / Female | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 2 |
Using the benchmarks
The tables of indicators, by including all HE providers in one table, allow direct comparisons to be made both between HE providers, and between a HE provider and the sector. However, if the benchmarks were ignored such comparisons would not take account of the effects of different subject profiles or the different entry qualifications of the students. In general, indicators from two HE providers should only be compared if their benchmarks are similar. If the benchmarks are not similar, then this suggests that the subject / entry qualification profiles of the HE providers are not the same, and so differences between the indicators could be due to these different profiles rather than to different performances by the two HE providers.
To compare a HE provider’s indicators to the sector, the benchmark should be used in preference to the overall sector average, again because it takes account of the subject and entry qualifications profile. We have provided a symbol beside the benchmark to show whether the difference between the indicator and the benchmark is significant.
Two symbols are used to show significance. A plus sign, ‘+’, indicates that the HE provider’s indicator is significantly better than its benchmark and a minus sign, ‘-’, indicates that the indicator is significantly worse than its benchmark. If there is a blank, the HE provider can say that its indicator is similar to the sector average allowing for subject and entry qualifications. HE providers whose indicator is significantly worse than the benchmark should look carefully at their figures to determine why the difference is occurring, bearing in mind that there may be some explanation based on factors that have not been taken into account.
In more technical terms, the ‘+’/‘-’ significance marker for HE provider j is defined as follows for tables T1, T2 and E1:
- ‘+’ if (indicator − benchmark) > 3 and (indicator − benchmark) > 3 × standard deviation
- ‘-’ if (benchmark − indicator) > 3 and (benchmark − indicator) > 3 × standard deviation
- blank otherwise
and for table series T3 and T5:
- ‘+’ if (benchmark − indicator) > 3 and (benchmark − indicator) > 3 × standard deviation
- ‘-’ if (indicator − benchmark) > 3 and (indicator − benchmark) > 3 × standard deviation
- blank otherwise.
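These rules can be sketched as a small Python function. The function name and the example figures are hypothetical; only the thresholds (3 percentage points and 3 standard deviations) come from the definitions above:

```python
def significance_marker(indicator, benchmark, sd, higher_is_better=True):
    """Return '+', '-', or '' (blank) for an indicator against its benchmark.

    A marker is shown only when the difference exceeds both 3 percentage
    points and 3 standard deviations. For tables T1, T2 and E1 a higher
    indicator is better; for non-continuation (T3) and projected outcomes
    (T5) a higher value is worse, so the signs flip via higher_is_better.
    """
    diff = indicator - benchmark if higher_is_better else benchmark - indicator
    if diff > 3 and diff > 3 * sd:
        return '+'
    if -diff > 3 and -diff > 3 * sd:
        return '-'
    return ''

# Hypothetical T1 example: indicator 92.0, benchmark 87.0, sd 1.2
print(significance_marker(92.0, 87.0, 1.2))  # prints '+'
```

For a T3 or T5 indicator the same numbers would be passed with `higher_is_better=False`, reversing the sign of the marker.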
For HE providers in England, location-adjusted benchmarks are included in tables T1 and T2, in addition to the original benchmarks. For the new low participation widening participation indicator, location-adjusted benchmarks have been produced for all HE providers. These benchmarks take account of where a HE provider’s students come from, as well as their subject and entry qualifications. They are the result of work done by HEFCE to measure the effect of location on the access indicators in these tables.
The difference between the two benchmarks will show how much effect the region of origin of a HE provider’s students has on the indicator. Small differences, say no more than 1 or 2 percentage points, suggest there is little effect. Either the HE provider recruits nationally, or it recruits locally from a region which is similar to the average of the UK as a whole. Larger differences mean that the geographical effect seems to be important.
Which benchmark is used will depend on the context. Both benchmarks provide information about the HE provider, and together they can shed light on why an indicator takes certain values. Note that in deciding whether two HE providers are similar, it is the original benchmark that is most informative – the fact that the location-adjusted benchmarks of two HE providers are different may only indicate that the HE providers are in different parts of the country. HE providers which do better against the location-adjusted benchmark than against the original one can point out that their location, in the sense of where their students come from, is affecting their results. A HE provider that does better against its original benchmark than against the location-adjusted benchmark may note that, although much of its success in recruiting students from low participation neighbourhoods, for example, is because of its location, nevertheless it is still taking in large numbers from such areas. In both cases HE providers should examine their results critically.
The location-adjusted benchmarks have not been included for HE providers in Wales, Scotland or Northern Ireland. The funding bodies for these HE providers have decided that such benchmarks could be confusing when applied to HE providers in these areas.
The factors allow the population to be broken down into well-defined categories, which are used in the calculation of the adjusted sector benchmark. In addition, the ‘sector population’ needs to be defined, as it is not the same in all cases. Each indicator relates to a specific sub-set of the HE provider’s students, for example, young full-time first degree students, or mature part-time undergraduates, and the adjusted sector benchmark is based on the equivalent sub-set of the sector population.
The sub-set of the population used will only contain students for whom information to calculate the indicator is available. The HE provider’s profile is also based only on those of its students with that information available. So, for example, if the information about school type is available for only 80 per cent of a HE provider’s students, the HE provider profile used to obtain the benchmark for the indicator will be based on that 80 per cent.
The number of categories used in the calculation of the benchmarks will depend on which factors are included. As there are 18 subject groups and 22 entry qualification groups, the original adjusted sector benchmark for the access indicators is based on 18×22=396 categories. For the non-continuation indicator for all ages, where age is also taken into account, the number of categories will double to 792 and for the location-adjusted benchmark for the access indicators, where region is also a factor, there will be 396×13=5148 categories.
Assume there are C categories, numbered from 1 to C, and U HE providers, numbered from 1 to U. Let the number of students in HE provider j in category k be njk. Then the total number of students at HE provider j is $n_{j.} = \sum_{k=1}^{C} n_{jk}$, the number from the sector in category k is $n_{.k} = \sum_{j=1}^{U} n_{jk}$, and the total number of students in the sector is $n_{..} = \sum_{j=1}^{U}\sum_{k=1}^{C} n_{jk}$.
Let p.k be the proportion of students in the sector from category k who have the characteristic of interest, for example, are from state schools, or have left HE after a year, and the equivalent proportion for HE provider j be pjk. The proportion of students in HE provider j with the characteristic of interest can be found as

$$p_{j.} = \sum_{k=1}^{C} \frac{n_{jk}}{n_{j.}}\, p_{jk}$$
This is the value of the indicator. If the proportion of students with the characteristic at the HE provider in each subject/entry qualification category was the same as in each category in the sector, then the overall proportion with the characteristic would be

$$E_j = \sum_{k=1}^{C} \frac{n_{jk}}{n_{j.}}\, p_{.k}$$
This is what we have called the ‘adjusted sector benchmark’.
Another way of interpreting this is to say it is the value that the sector average would have if the sector students were split across the C categories in the same proportions as at the HE provider.
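As a concrete sketch of these two weighted averages, the following Python fragment computes the indicator and the adjusted sector benchmark from an invented grid of three HE providers and two categories (all numbers are hypothetical):

```python
import numpy as np

# Hypothetical data: 3 HE providers (rows) x 2 categories (columns).
# n[j, k] = students at provider j in category k;
# p[j, k] = proportion of those students with the characteristic.
n = np.array([[100.0, 50.0],
              [ 20.0, 80.0],
              [ 60.0, 60.0]])
p = np.array([[0.90, 0.70],
              [0.85, 0.60],
              [0.95, 0.75]])

n_j = n.sum(axis=1)               # provider totals, n_j.
n_k = n.sum(axis=0)               # sector totals per category, n_.k
p_k = (n * p).sum(axis=0) / n_k   # sector proportion per category, p_.k

indicator = (n * p).sum(axis=1) / n_j   # p_j. for each provider
benchmark = (n @ p_k) / n_j             # E_j: sector rates, provider's mix
```

Each provider’s benchmark applies the sector’s category rates to that provider’s own mix of students, so the indicator and benchmark differ only where the provider’s cell proportions differ from the sector’s.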
In general, small differences between an indicator and its benchmark are not important. However, it is not always obvious what constitutes a small difference. A standard deviation measures the amount by which one would expect a statistic to change, based solely on random sampling, and can therefore be used to say that a particular difference is significant or not. We have calculated the standard deviations of the differences between the indicators and their benchmarks, using a method developed by Professor David Draper and Mark Gittoes, formerly at the University of Bath. (Note that, because these are standard deviations of a statistic, they are more usually called standard errors.)
The mechanics of the calculations are explained below. More details of the statistical model used can be found in ‘Statistical analysis of performance indicators in UK higher education’ by D. Draper and M. Gittoes, Journal of the Royal Statistical Society, Series A, volume 167, part 3, 2004.
Assume that there are C categories for the factors used in the benchmarks, and U HE providers. The complete set of C × U cells will be called the basic grid. The actual indicator at HE provider j, p_{j.}, is a weighted average of the form

$$p_{j.} = \sum_{k=1}^{C} \frac{n_{jk}}{n_{j.}}\, p_{jk}$$

The proportion of students in the sector in category k, p_{.k}, is

$$p_{.k} = \sum_{i=1}^{U} \frac{n_{ik}}{n_{.k}}\, p_{ik}$$

and the benchmark for HE provider j, E_j, is

$$E_j = \sum_{k=1}^{C} \frac{n_{jk}}{n_{j.}}\, p_{.k}$$

The difference between the indicator for HE provider j and its benchmark, D_j = p_{j.} − E_j, can then be written as a weighted sum over all C × U cells in the basic grid:

$$D_j = \sum_{i=1}^{U}\sum_{k=1}^{C} \lambda_{jik}\, p_{ik}, \qquad \lambda_{jik} = \frac{n_{jk}}{n_{j.}}\left(\delta_{ij} - \frac{n_{ik}}{n_{.k}}\right)$$

where $\delta_{ij} = 1$ if $i = j$ and 0 otherwise.
Assuming that the njk students at HE provider j in category k are like a random sample (with replacement) from the population of all such future students, the values p_{ik} and D_j can be estimated as $\hat{p}_{ik} = r_{ik}/n_{ik}$ and $\hat{D}_j = \sum_{i,k} \lambda_{jik}\,\hat{p}_{ik}$ respectively, where $r_{ik}$ is the number of students in cell (i, k) with the characteristic of interest. The variance of $\hat{p}_{ik}$ is given by

$$\operatorname{Var}(\hat{p}_{ik}) = \frac{p_{ik}(1 - p_{ik})}{n_{ik}}$$
We then have to estimate the variance of $\hat{D}_j$.
Draper and Gittoes show that a reasonable estimate of this variance is obtained by using a shrinkage estimation procedure. The value used here is

$$\widehat{\operatorname{Var}}(\hat{D}_j) = \sum_{i=1}^{U}\sum_{k=1}^{C} \lambda_{jik}^2\, \frac{\tilde{p}_{ik}\,(1-\tilde{p}_{ik})}{n_{ik}}$$

where $\tilde{p}_{ik} = (n_{ik}\,\hat{p}_{ik} + \bar{p})/(n_{ik} + 1)$, and $\bar{p}$ is the estimated proportion with the characteristic of interest in the sector as a whole.
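A minimal sketch of this calculation follows, assuming the shrinkage estimate shrinks each cell proportion towards the sector-wide proportion with weight 1 (all counts below are invented, and r denotes the number of students in each cell with the characteristic):

```python
import numpy as np

# Hypothetical counts for U = 3 providers (rows) and C = 2 categories (cols):
# n[i, k] = students in the cell, r[i, k] = those with the characteristic.
n = np.array([[100.0, 50.0], [20.0, 80.0], [60.0, 60.0]])
r = np.array([[ 90.0, 35.0], [17.0, 48.0], [57.0, 45.0]])

U, C = n.shape
n_i = n.sum(axis=1)                        # provider totals, n_i.
n_k = n.sum(axis=0)                        # category totals, n_.k
p_hat = r / n                              # cell estimates p^_ik
p_bar = r.sum() / n.sum()                  # sector-wide proportion
p_tilde = (n * p_hat + p_bar) / (n + 1)    # shrunk cell estimates (assumed form)

def sd_of_difference(j):
    """Standard deviation of D^_j = p_j. - E_j for provider j."""
    # weights lambda_jik = (n_jk / n_j.) * (delta_ij - n_ik / n_.k)
    delta = np.zeros(U)
    delta[j] = 1.0
    lam = (n[j] / n_i[j]) * (delta[:, None] - n / n_k)
    var = (lam**2 * p_tilde * (1 - p_tilde) / n).sum()
    return np.sqrt(var)
```

The difference would then be marked as significant only if it exceeds both three times `sd_of_difference(j)` and three percentage points, as described above.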
The square root of the estimated variance, which is the standard deviation of the indicator, can then be used to test whether the difference between the indicator and its benchmark is small or not. A difference that is less than twice the size of the standard deviation can certainly be said to be small, but we have been more conservative. In the tables, we have marked as ‘large’ those HE providers where the difference is both greater than three times the standard deviation and greater than three percentage points. This is to draw attention to areas where the difference is large in both statistical and practical terms.
If a HE provider is marked in this way, it should be taken as an invitation to investigate possible causes for the differences that have been identified, whether they arise from an indicator that is better than the benchmark (marked +), or worse than the benchmark (marked -). Where the difference is not marked, the indicator is either within the range that would be expected given random fluctuations, or is less than three percentage points away from the benchmark.
Two additional context statistics have been provided for the indicators in Tables T1, T2 and T3. These are:
- The average number of HE providers in the adjusted sector benchmark comparison
- The average proportion which the HE provider’s own students contribute to the benchmark.
These context statistics are provided for both the original benchmark and the location adjusted benchmark in Tables T1 and T2.
It is important to note that both of these statistics are average values. The numbers do not relate to specific HE providers. The interpretation is fairly straightforward. If the average number of HE providers in the comparison is small, say less than 20, then there are not many HE providers whose students are similar to the one in question. If the students at the HE provider contribute a large proportion to the benchmark, say more than 20 per cent, then the adjusted sector benchmark will be similar to the HE provider’s own value. For the original benchmarks, very few HE providers have a small number of comparable HE providers or contribute a large proportion to the benchmark. For the location-adjusted benchmarks, the number of comparable HE providers is likely to be smaller and the average contribution to the benchmark is likely to be larger than for the original benchmarks, and so the location-adjusted benchmarks are generally closer to the indicators than are the original benchmarks.
These statistics are designed, in particular, to pick up situations where the benchmark is of limited use because there are few other HE providers that really are comparable.
Average number of HE providers in comparison
The calculation of the two context statistics is based on the sector grid of entry qualifications and subject of study. For each cell in the grid, we count the number of HE providers with students in that cell. Let this number be n_{ij} for subject i and entry qualification j. For the HE provider of interest, call the number of its students t, and let t_{ij} be the number studying subject i with entry qualification j. Then for each cell compute $(t_{ij}/t)\,n_{ij}$, and sum these values over all cells. So the required value is:

$$\text{average number of HE providers in comparison} = \sum_{i,j} \frac{t_{ij}}{t}\, n_{ij}$$
Average contribution to benchmark
To find the contribution of the HE provider’s students to the benchmark, we use a similar weighted average, but now of the proportion of each cell’s students who come from the HE provider. If the number of students in the sector who are studying subject i and have entry qualification j is T_{ij}, then in any cell the HE provider’s students form a proportion $t_{ij}/T_{ij}$ of the total, and the context statistic is the weighted average of these values, namely

$$\text{average contribution to the benchmark} = \sum_{i,j} \frac{t_{ij}}{t}\cdot\frac{t_{ij}}{T_{ij}}$$
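Both context statistics are simple weighted sums over the cells of the grid. A minimal sketch with invented cell counts (cells flattened from the subject × entry qualification grid):

```python
# Hypothetical sector grid, three cells:
# providers[c] = number of HE providers with students in cell c
# T[c]         = sector students in cell c
# t_cell[c]    = this provider's students in cell c
providers = [40, 55, 12]
T = [5000.0, 8000.0, 300.0]
t_cell = [120.0, 60.0, 20.0]
t = sum(t_cell)

# Weighted by the provider's own distribution of students across cells:
avg_providers = sum(tc / t * m for tc, m in zip(t_cell, providers))
avg_contribution = sum(tc / t * (tc / Tc) for tc, Tc in zip(t_cell, T))
```

In this invented example most of the provider’s students sit in cells shared with many other providers, so the average number in the comparison is high and the provider’s own contribution to the benchmark is small.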
The adjusted sector benchmarks for the projected outcomes indicators in table T5 are obtained by adjusting the transition matrix rather than the actual indicators. The standard deviations have therefore been obtained by assuming students have been selected at random from the outcome categories. These are simplifications, but appear to give realistic results in most cases. Further details of the methods used can be found in the definitions and technical document.