Changes to the UK Performance Indicators
The introduction of the tariff score in the sector led to changes to the benchmark groupings from 2002/03. Most of the change was to the groupings of scores for A-levels and Scottish Highers, but there was also a new category for the Baccalaureate (formerly included with up to 4 A-level points) and a category for students with both Vocational A-levels (VCE) and A-levels or Highers. The GNVQ level 3 category now contains students who have tariff scores for VCE qualifications only. The tariff score categories were chosen so that, as far as possible, they are of equal size. These categories were used until a decision could be made on how to make use of the more detailed entry qualification data collected by HESA from 2007/08. More details on tariff data can be found in the introduction of tariff.
As a result of the new HESA student record in 2007/08 and the availability of more detailed entry qualification data, the entry qualifications categories used in the production of the benchmarks were modified slightly (applicable to tables T1, T2, T3, T5, T7 and E1). For entrants from 2010/11 onwards, the coding frame for the highest qualification on entry (HESA field QUALENT3, previously QUALENT2) also changed. As a result, the entry qualification groups were redefined using the new coding frame and are not strictly comparable with those used previously, see definitions for more details.
The availability of richer qualification information from 2007/08 allowed analysis of entry qualification at a more detailed level, separating out those students with certain combinations of grades of A-levels and Highers, see definitions for more details. From 2008/09, these modified entry qualification groups were used in the widening participation benchmarks (T1, T2); from 2009/10, the same qualification groups were used in the non-continuation benchmarks (T3); and from 2012/13, the same qualification groups were used in the projected outcomes (T5), DSA (T7) and employment (E1) benchmarks, with the latter using a grouped-up version. These changes to the entry qualification groups have altered the benchmarks for some HE providers that have large proportions of students with high A-level and Highers grades.
The qualifications included within the tariff calculations have changed over time; for full details, please refer to the UCAS list of tariff-bearing qualifications.
Detailed information on how the benchmark groups have changed over time can be found within the definitions document.
Changes to the Disabled Students' Allowances administered through Student Finance England took effect in the academic years 2015/16 and 2016/17. These changes rebalanced responsibility for supporting disabled students, with HEPs providing certain aspects of disability-related support previously funded via the DSAs, including funding for specialist equipment and accommodation costs. Further details have been published on the changes to DSA. As a result of these changes, the Performance Indicator reported in Table T7 - Participation of UK domiciled students in higher education who are in receipt of Disabled Students' Allowance for the 2016/17 cohort is not consistent with previous years.
Due to a change in the accessibility of DSA funding, discrepancies between The Open University's and the rest of the sector's approach to DSA are now thought to be negligible. The Open University has been included in benchmarking calculations from 2017/18 and in the experimental release from 2016/17.
| Funding to support | Recent changes |
| --- | --- |
| Non-medical help, band 1: Practical support assistants | Costs no longer met by DSAs from 2016-17 |
| Non-medical help, band 2: Enhanced support assistants | Costs no longer met by DSAs from 2016-17 |
| Non-medical help, band 3: Specialist enabling support | Costs continue to be covered by DSAs (except specialist transcription services) |
| Non-medical help, band 4: Specialist access and learning facilitators | Costs continue to be covered by DSAs (except specialist transcription services) |
| Specialist equipment, e.g. computer software for students with specific learning difficulties | New students recommended and agreed a computer via DSAs are required to pay £200 towards the cost from 2015-16. Standard computer peripherals and software funded by exception only from 2016-17. Individual printing and scanning devices only funded through DSAs if the need cannot be met through other measures. |
| Accommodation costs | Additional accommodation costs funded by exception only from 2016-17 |
Contextual information on the number of students who classified themselves as having a disability by HE provider for 2015/16 and 2016/17 can be found in the following disability table. The data covers all students and is not restricted to the standard UK Performance Indicators population of UK domiciled undergraduates.
Following the 2013 fundamental review of the UK Performance Indicators (UKPIs), the UK Performance Indicators Steering Group (UKPISG) committed to undertake an in-depth review process of the UKPIs relating to widening the higher education participation of under-represented groups. Having made progress with this review, UKPISG confirmed the discontinuation of the UKPI based on National Statistics Social Economic Classifications 4 to 7 from the 2017 and subsequent publications.
Stakeholder consultation within the 2013 fundamental review of the UKPIs had reinforced existing UKPISG concerns that the data underlying the NS-SEC based indicator were of poor quality. Further advice was received from the UK Performance Indicators Technical Group in February 2014 [pdf 58 KB] (in an assessment of how well the existing WP UKPIs match the principles agreed for UKPIs [pdf 16 KB]), and roundtable discussions held with an expert group (convened to advise on the in-depth review of the widening participation UKPIs) in December 2014 provided further confirmation [pdf 86 KB] that the data used for the indicator are widely acknowledged to be of poor quality.
It is because this evidence conflicts significantly with agreed principle B2 for UKPIs that the UKPISG confirmed the removal of this indicator. Principle B2 states that ‘UKPIs should be evidence-based and statistically robust, conforming to recognised best practice in the production of statistical information. Data used for the indicator should be of high quality collected in a consistent and fair way across the sector; [it] should have a good sample base, use consistent definitions, and use a transparent methodology’.
HEFCE’s July 2015 circular letter "Invitation to comment on future changes to the UK Performance Indicators" provides further context for the decision to remove the NS-SEC based indicator, as well as information on other changes to be implemented in the 2016 publications of UKPIs.
2007/08 - 2009/10
For the 2008/09 academic year, UCAS changed the question that informs NS-SEC for the majority of applicants. The question reverted to the original wording for 2009/10 applicants.
For applicants up to and including the 2007/08 academic year and for the 2009/10 academic year, UCAS asked:
"If you are under 21, please give the occupation of your parent, step-parent or guardian who earns the most. If he or she is retired or unemployed, give their most recent occupation. If you are 21 or over, please give your own occupation."
For applicants for 2008/09 entry, the question changed to:
"If you are in full-time education, please state the occupation of the highest-earning family member of the household in which you live. If he or she is retired or unemployed, give their most recent occupation. If you are not in full-time education, please state just your own occupation."
The change in question between 2007/08 and 2008/09 had an impact on the NS-SEC indicators, causing the proportion of students classified as ‘unknown' and those classified as falling into NS-SEC groups 4 to 7 to rise. Given these differences and the lack of any significant external changes to the system, it is safe to conclude that the change in question means that the NS-SEC data for 2008/09 is not comparable with that published previously and as a result, the 2008/09 NS-SEC data was published separately in tables T1ai, T1bi and T1ci and labelled as age-adjusted NS-SEC.
Although the question reverted in 2009/10, there may still be a slight impact on the NS-SEC indicators from applicants who applied using the 2008/09 form but deferred entry until 2009/10. However, the 2009/10 data will be more comparable with that published up until 2007/08 than with the 2008/09 data. Therefore, the NS-SEC time series data published within the summary excludes data for 2008/09.
For the 2001 census, a new classification, National Statistics - Socio-Economic Classification (NS-SEC), was developed to replace Social Class. It took into account new work patterns in the UK and the changes in the education levels required for, and the status of, large numbers of occupations. This new classification was used for the social class PI from 2002/03 and called the SEC indicator. More details on the differences between SEC and social class can be found in changes between 2001/02 and 2002/03 below.
From 2014/15, for new entrants, the last provider attended field (PREVINST) must contain a valid UKPRN or a valid generic code (see the HESA coding manual for details), rather than the historic UCAS, department and HESA school codes. These have been mapped to school type and grouped up to form the state school marker where appropriate. Where an unknown or invalid PREVINST code has been supplied, these students have been excluded from the calculations of the state school indicator. Due to the changes in the coding frame there is likely to be an impact on the quality of the school type data.
A notable change from the 2014/15 UK Performance Indicators is the allocation of The Open University (OU) students to England, Wales, Scotland and Northern Ireland. Previously all OU enrolments and qualifications were counted within England, where the OU has its administrative centre. From this release onwards enrolments and qualifications registered at one of the OU’s national centres in Wales, Scotland and Northern Ireland will contribute to the totals of those countries where statistics are shown by country of HE provider. For tables which include part-time students, a total row has been provided for the OU which has been produced consistently with previous published versions. Total rows for the OU will be produced for 3 years to enable the continuation of time series. From the 2019 publication, the OU will only be shown under the national centres.
Following the recommendations of the UK Performance Indicators review, the UK Performance Indicators Steering Group (UKPISG, formerly PISG) agreed that the postcode indicator should be replaced. From 2005/06 (T1a only) and 2006/07 (T1b-T1c), the existing Super Profiles low participation indicator was replaced with a new indicator based on the revised POLAR definitions of low participation areas, using the lowest quintile of wards as low participation. More details on the change in methodology can be found in changes to the postcode indicator.
From the 2011/12 publication onwards, the low participation data uses the updated POLAR3 classification; more information on the POLAR3 classification and the files used in the mapping can be found in the definitions document. The POLAR3 low participation data is calculated in a similar way to the POLAR2 low participation data but uses more up-to-date information, so the two datasets are not strictly comparable. For time series purposes, the indicators for 2009/10 to 2011/12 have been produced using both POLAR2 and POLAR3 data. The POLAR3 data was updated in July 2015 and this updated version was used in the creation of the Performance Indicators from 2014/15. The updated version includes postcodes that have recently been added in the UK.
From the 2018/19 publication onwards, the low participation data uses the updated POLAR4 classification; more information on the POLAR4 classification and the files used in the mapping can be found in the definitions document. The POLAR4 data is calculated in a different way from previous POLAR mappings and therefore the two datasets are not strictly comparable. For time series purposes, the indicators for 2015/16 to 2017/18 have been produced using both POLAR3 and POLAR4 data and are published in the Experimental indicators in the archive of releases.
From 2007/08 onwards, the low participation data has not been produced for HE providers in Scotland. The low participation POLAR2/POLAR3 measure used in Tables T1, T2 and T3b is based on a UK wide classification of areas into participation bands. The relatively high (in UK terms) participation rate in Scotland coupled with the very high proportion of HE that occurs in FE colleges means that the figures for Scottish HE providers could, when viewed in isolation, misrepresent their contribution to widening participation.
At the Performance Indicators Steering Group (PISG) held on 16 September 2013, it was agreed that, henceforth, the Performance Indicators (PIs) would be referred to as the UK Performance Indicators for Higher Education, or the UKPIs. The groups governing these measures would be renamed accordingly, as the UK Performance Indicators Steering Group (UKPISG) and the UK Performance Indicators Technical Group (UKPITG).
From the 2012/13 publication, all tables and charts have been modified to include UK domiciled in the title. This is not a change to the population, but has been included to clarify that data within the UKPIs is restricted to those students who were living in the UK prior to entering higher education.
Since HESA took over the publication of the UK Performance Indicators, there have been four main changes to the subject groups used in the benchmark calculations. These took place in 2002/03, 2004/05, 2007/08 and 2012/13.
In 2002/03 a new subject classification was introduced called the Joint Academic Coding System (JACS). This subject classification looks similar to that previously published but has been devised in a different way. Therefore subject data published from 2002/03 is not comparable to that previously published.
There was also a change to the subject area groups in 2004/05. The UK Performance Indicators Steering Group (UKPISG, formerly PISG) agreed to use the standard HESA JACS subject groupings, but with medicine & dentistry grouped with veterinary science. Previously, the benchmarks used more aggregated JACS subject groups. This change had very little effect on the benchmarks.
From 2007/08 onwards, JACS2 subject codes have been used. There was little change at subject area level, but the following subjects moved to a different subject area:
"C230 Plant biotechnology Involves the molecular and microbial manipulation of plants" has moved from 3 Biological Sciences to "J710 Plant biotechnology (crops, trees, shrubs, etc.) Involves the molecular and microbial manipulation of plants" which falls within subject area 9 Engineering & Technology.
"C560 Biotechnology The molecular and microbial bio-organisms for processes such as fermentation and enzyme technology." has moved from 3 Biological Sciences to "J700 Biotechnology The use of biological processes or organisms for the production of materials and services. Biotechnology includes the use of techniques for the improvement of the characteristics of economically important plants and animals and for the development of micro-organisms to act on the environment" which falls within subject area 9 Engineering & Technology.
For 2012/13, a review of a selection of the existing subject areas of the JACS coding system resulted in the implementation of a revised 'JACS3' version of the coding frame. The JACS2 version it replaced had been in existence since 2007/08. The full listing of JACS3, including a mapping between JACS2 and JACS3, can be found here. JACS3 and JACS2 are not directly comparable at any level other than subject area, although many codes have been retained in the newer coding frame.
From 2012/13, the JACS3 subject areas have been used within the widening participation benchmarks and will be used within the non-continuation benchmarks from 2013/14.
From 2011/12, percentages and indicators calculated on populations containing fewer than 22.5 individuals have been suppressed and represented as '..'. Prior to 2011/12, a suppression threshold of fewer than 20 individuals was used. This brings the suppression levels more in line with other publications such as UNISTATS.
From 2011/12, data within the employment indicators have been suppressed and represented as a blank cell where the response rate for a HE provider is less than 85 per cent of the target response rate (68.0% for table E1a, 59.5% for tables E1b-E1d). Data for these HE providers have been excluded from all indicator and benchmark calculations.
From 2012/13, where data for a HE provider has been suppressed, they have been removed from all totals and benchmark calculations, see PITG paper 13/01 [pdf 65 KB]. Also from 2012/13, in the event that a HE provider's data contains more than 50 per cent unknown values within the benchmarking factors, the benchmark has been suppressed and represented as a blank cell. This data has, however, been retained and included in totals and benchmark calculations for the rest of the sector. See PITG paper 13/03 [pdf 71 KB] and PITG minutes, 28 February 2013 [pdf 114 KB].
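Taken together, the suppression rules above reduce to a pair of threshold checks. The sketch below is illustrative only: the function names and parameters are assumptions for exposition, not HESA production code.

```python
def suppress_indicator(population, value):
    """From 2011/12, percentages and indicators based on populations of
    fewer than 22.5 individuals are suppressed and shown as '..'."""
    if population < 22.5:
        return ".."  # suppressed: population too small to publish
    return value


def employment_cell(response_rate, target_rate, value):
    """From 2011/12, employment data are shown as a blank cell where a
    provider's response rate falls below 85 per cent of the target
    response rate (e.g. 0.85 * 80% = 68.0% for table E1a); such
    providers are also excluded from indicator and benchmark
    calculations."""
    if response_rate < 0.85 * target_rate:
        return ""  # blank cell; provider excluded from benchmarks
    return value
```

The 68.0% and 59.5% thresholds quoted above correspond to assumed target response rates of 80% (E1a) and 70% (E1b-E1d) respectively.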
In 2011/12, the questions on the Destinations of Leavers from HE (DLHE) questionnaire were modified and as a result, the concept of activity was redefined for use in HESA publications. It was agreed by the PITG and the PISG to use the standard HESA publication categories for the employment indicators from 2011/12 onwards. Also in 2011/12, the DLHE population was extended to include leavers with additional qualifications. The only undergraduate qualifications now excluded from the DLHE population are: Intercalated degrees; Awards for visiting students; Post-registration health and social care awards; Professional qualifications for serving schoolteachers and Awards of credit. See DLHE collection 2011/12 for more details.
Due to changes on the DLHE questionnaire and hence changes to the derivation of the activity categories, the employment indicator for 2011/12 onwards is not strictly comparable with the indicator prior to 2011/12.
In 2018/19, the standard UK Performance Indicators population was extended to include Alternative Providers. HESA published a series of experimental widening participation and non-continuation indicators (excluding projected outcomes) using an enhanced methodology and with extended coverage to include Alternative Providers. Users of the data were invited to provide feedback on the revised methodology to help inform the decision on using this as the default method going forward. Based on the feedback received, and in consultation with key stakeholders, the decision was made to publish the indicators using only the revised methodology from 2020. These are no longer labelled as experimental statistics, and the revised methodology became the default method for producing the indicators going forward.
In 2010/11, the standard UK Performance Indicators population was extended to include students on low credit bearing courses (instances with 10 percent FTE or less), regardless of whether or not a reduced return was submitted. Historically, the standard UK Performance Indicator population did not include students on low credit bearing courses as some of the key fields used to define the population were not required for these students and information submitted in these fields was not retained. From 2007/08, a reduced return was still acceptable for students on low credit bearing courses, but where data was returned, the information was retained.
Analysis of the 2009/10 data showed that over 90 percent of UK domiciled low credit bearing undergraduate students were already included within the UK Performance Indicators population. It was agreed by the UK Performance Indicators Technical Group (UKPITG, formerly PITG) that the remaining low credit bearing UK domiciled undergraduate students would be included within the population from 2010/11. This change brings the UK Performance Indicators population more in line with the HESA Session population. Since a large majority of low credit bearing students are mature, part-time other undergraduates, the main impact is on table T2b.
Due to changes in funding arrangements, the Scottish Agricultural College is now funded by the Scottish Funding Council and as a result is included in the UK Performance Indicators from 2010/11.
From the 2009/10 edition, the definition of part-time mode of study was modified slightly:
Part-time mode of study includes students returned as having part-time mode of study and FTE (STULOAD) greater than 50 or, where a student instance spans HESA reporting years, the sum of FTE load A (LOADYRA) from the reporting year and FTE load B (LOADYRB) from the previous year is greater than 50. Previously only part-time students with FTE greater than 50 were included. The impact of this change is minimal for the majority of HE providers.
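Expressed as a rule, the revised definition can be sketched as follows. The field names (STULOAD, LOADYRA, LOADYRB) follow the HESA student record; the function itself is an illustrative assumption rather than actual HESA code.

```python
def counts_as_part_time(mode_is_part_time, stuload=None,
                        loadyra=None, loadyrb=None):
    """From 2009/10, a part-time student is included if their FTE
    (STULOAD) is greater than 50 or, where the instance spans HESA
    reporting years, LOADYRA (this reporting year) plus LOADYRB
    (previous reporting year) is greater than 50."""
    if not mode_is_part_time:
        return False
    if stuload is not None and stuload > 50:
        return True
    if loadyra is not None and loadyrb is not None:
        return loadyra + loadyrb > 50
    return False
```

Under the pre-2009/10 definition, only the first check (STULOAD greater than 50) applied.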
The non-continuation indicators (series T3) were extended to cover part-time first degree students and published as table T3e from 2008/09. The part-time non-continuation indicator differs from the full-time indicators in that it looks at continuation in the two years following entry. It is further restricted to students studying at least 30% of an FTE. In addition, part-time students are excluded if they are found to be undertaking more than one course of HE at a HE provider in the year of entry, if they undertook first degree study in the year prior to entry, or if they leave the programme of study within 50 days of commencement. For details, please refer to the technical document. The subject and entry qualification groups used in the part-time benchmarks differ slightly from those used in the full-time indicators. Please refer to the benchmarks document for full details.
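The inclusion rules for the part-time non-continuation population can be summarised as a single filter. This is a hypothetical sketch with assumed parameter names; the full derivation is in the technical document.

```python
def in_part_time_t3e_population(fte_percent, courses_in_entry_year,
                                prior_first_degree, days_enrolled):
    """Illustrative inclusion test for the part-time non-continuation
    (T3e) population: the student must study at least 30% of an FTE,
    be on a single course of HE at the provider in the year of entry,
    have no first degree study in the year prior to entry, and remain
    on the programme for more than 50 days after commencement."""
    return (fte_percent >= 30
            and courses_in_entry_year == 1
            and not prior_first_degree
            and days_enrolled > 50)
```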
The coverage of the UK employment Performance Indicators was extended from 2008/09 to cover part-time first degree leavers plus full-time and part-time other undergraduate leavers. These were published alongside the data for full-time first degree leavers as tables E1a, E1b, E1c and E1d respectively.
For more detailed information on how the benchmark subject groups have changed, please refer to the definitions document.
Overview of earlier changes
From 2005/06, there was a major change in the way the low participation data has been produced for the Widening Participation Indicators.
Following the recommendations of the UKPI review, the UK Performance Indicators Steering Group (UKPISG, formerly PISG) agreed that the postcode indicator should be replaced from this year. The new indicator is based on the revised POLAR definitions of low participation areas, using the lowest quintile of wards as low participation. Further explanation and details of the quintiles used, and their relationship to postcodes, can be found here (see the CSV file at the bottom of that page for the lookup table).
A POLAR2 low participation location adjusted benchmark has been provided for all HE providers. For Scottish HE providers, it is recommended that the location adjusted benchmark is used in preference over the raw benchmark, as location adjustment takes account of variation in participation rates between the different Home Nations, and in particular the higher participation rate in Scotland. See discussion below.
The UKPI review noted that the existing postcode indicator was based on an old geodemographic classifier, and proposed two replacements. One was to be based on the Index of Multiple Deprivation (IMD), and the other on the POLAR2 definition of low participation areas.
There was general agreement that a replacement was needed, and support was expressed for both proposals. However, it was clear that further work on the detail was needed. One difficulty with the IMD was comparability across the countries of the UK, with the definitions used for the index in one country being slightly different from those in another. In the end, it was accepted that these difficulties could not be overcome in the short term, although PISG is looking for an alternative measure that would be acceptable.
It has therefore been agreed that initially there should be just one replacement indicator, based on the POLAR2 definition. For this indicator, which is applied to young and mature, full-time and part-time entrants, a 2001 Census Area Statistics ward is defined as low participation if its participation rate places it in the bottom 20 per cent of wards ranked by this measure.
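As described, the POLAR2 classification amounts to ranking wards by their young participation rate and flagging the bottom fifth. The sketch below is a simplified illustration (it takes the quintile as the bottom 20 per cent of wards by count, and any ward data supplied to it would be for illustration only):

```python
def low_participation_wards(participation_by_ward):
    """Given a mapping of ward -> young HE participation rate, return
    the set of wards in the bottom 20 per cent of wards ranked by that
    rate, i.e. those classed as low participation."""
    ranked = sorted(participation_by_ward, key=participation_by_ward.get)
    cutoff = len(ranked) // 5  # bottom quintile by ward count
    return set(ranked[:cutoff])
```

For example, with ten wards whose participation rates all differ, the two lowest-rate wards would be flagged as low participation.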
The value of this new indicator (POLAR2) for young full-time first degree entrants across the sector is 9.0 percent, rather less than the value for the old indicator (Super Profiles) of 14.7 percent (14.0 percent in 2005/06). The benchmarks are calculated as before, and take account of this reduced average value.
For the majority of HE providers, the change does not adversely affect the indicator. As with the other indicators, there will be fluctuations at HE providers year on year, but it is not anticipated that these will be any greater than previously.
Scottish HE providers
The one group which is affected is the set of Scottish HE providers. The revised participation rate is higher in Scotland than elsewhere in the UK, and so the number of students from low participation areas in Scotland is lower than in the rest of the UK. The percentage of students from low participation areas at Scottish HE providers has fallen substantially (from 18.5 percent in 2006/07 using Super Profiles to 3.2 percent in 2006/07 using POLAR2). The benchmark shows that most of the HE providers in Scotland are significantly below the UK average, however the location-adjusted benchmark allows for this between-country difference.
There are two main reasons why this participation-based classification of areas in Scotland differs substantially from the classification used previously for the performance indicators. Firstly, rather than use a geodemographic classifier, the new method uses Census Area Statistics wards. Previously, the classifier clustered together very small areas (enumeration districts or data zones) that were judged to be similar in terms of the values of a range of 1991 census variables. Each of the resulting 160 GB clusters then contained a population large enough for participation rates to be calculated, and it was on the basis of these calculated rates that the designation as low participation or not was made. The disadvantage of this was that areas within a cluster were not necessarily close together, so many of the clusters formed using Scottish data zones also included numbers of English enumeration districts. This meant that the high participation rates in many of the poorer areas of Scotland (high relative to those in England) were ‘diluted’ by those English areas. Now that the more numerous and smaller wards are being used as the units for calculating participation rates, this dilution no longer occurs, and the high participation rates in Scotland are correctly recorded.
The second important reason for the changes is that the measure of participation is much improved from what was available when the Performance Indicators were first published in 1999. In particular, the new participation measure includes HE that takes place in FE colleges and is returned on non-HESA records. Previous work by HEFCE (2005/03) showed that this type of participation in FECs is much more prevalent in Scotland than in England and Wales, particularly so for poorer areas (HEFCE 2005/03, page 42).
Both these effects act to substantially reduce the proportion of areas in Scotland that are classified as having low participation relative to the rest of the UK. In turn this has acted to lower the low participation area performance indicator for any HE provider that has substantial recruitment from Scotland.
More information on the localised effects
Under certain conditions the location of a HE provider can have an impact on the low participation neighbourhood indicator, making it appear different from the other widening participation indicators. In particular, there are several characteristics which have an impact on HE providers in London and Scotland.
In London, HE providers tend to recruit a high proportion of students from this area. The participation rate overall is higher in London than for most other parts of the country. These factors taken together mean that areas in London may be less likely than similar areas elsewhere to be classed as low participation. As a result, HE providers in London tend to have a lower proportion of students from low participation neighbourhoods relative to their benchmarks.
In Scotland, again, HE providers tend to recruit a high proportion of students from the local area. The old low participation indicator clustered together very small areas (enumeration districts or data zones) that were judged to be similar in terms of the values of a range of 1991 census variables. Each of the resulting 160 GB clusters then contained a population large enough for participation rates to be calculated, and it was on the basis of these calculated rates that the designation as low participation or not was made. The disadvantage of this was that areas within a cluster were not necessarily close together, so many of the clusters formed using Scottish data zones also included numbers of English enumeration districts. This meant that the high participation rates in many of the poorer areas of Scotland (high relative to those in England) were ‘diluted’ by those English areas. The POLAR2 low participation method uses the more numerous and smaller Census Area Statistics wards as the units for calculating participation rates, so this dilution no longer occurs and the high participation rates in Scotland are correctly recorded. The participation measure also includes HE that takes place in FE colleges and is returned on non-HESA records. Previous work by HEFCE (2005/03) showed that this type of participation in FECs is much more prevalent in Scotland than in England and Wales, particularly so for poorer areas (HEFCE 2005/03, page 42). Both these effects act to substantially reduce the proportion of areas in Scotland that are classified as having low participation relative to the rest of the UK. In turn this has acted to lower the low participation area performance indicator for any HE provider that has substantial recruitment from Scotland. For this reason, low participation data for Scottish HE providers have been excluded from the Performance Indicators.
Measuring effects of locality
Supplementary Table SP1 shows the percentages of young entrants from each of the regions of the UK who come from low participation neighbourhoods (POLAR2); NS-SEC Classes 4, 5, 6 and 7; and state schools. The scale of the differences between regions means that HE providers which recruit most of their students locally may find they have characteristics quite different from the national average.
Because of these differences, we have looked at ways in which a student’s domicile could be incorporated into the existing benchmarks of the widening participation indicators. Using the same methodology as is used for the current benchmarks, and taking the student’s region of origin as another factor, we have produced a value that will give an indication of how important the location factor is. This is the location-adjusted benchmark.
For HE providers which recruit from across the UK, there is very little difference between the standard benchmark and the location-adjusted benchmark. HE providers which recruit more locally will have larger differences, possibly 3 or 4 percentage points, between the original and the location-adjusted benchmark. These larger differences show that the indicator is affected by the characteristics of the areas from which the HE provider recruits. In general, the greatest differences occur for the low participation indicator, and the smallest for the NS-SEC indicator.
In considering how best to measure locality effects, a major concern was raised. By allowing for the effects of locality, there is a danger that what we are trying to measure could be partly obscured. Differences between geographical areas may be caused by disparities between HE providers, or these disparities may be the result of geographical differences. Until we have resolved this circularity we need to be careful in making allowances for geographical effects.
There is a further difficulty with the method used. In theory, if a HE provider situated in a region of low participation were to recruit predominantly from another region of high participation, that HE provider’s benchmark would not reflect its locality. Rather, it would reflect the locality from which its students were recruited. In practice that is unlikely to happen, partly because we have used region rather than some smaller geographical area as the basis.
The location-adjusted benchmark has only been used with the participation indicators, because of the known differences in the way these groups are spread across the country. It has not been used with the indicators of retention or non-continuation, nor is there any plan to do so, for two reasons. The major reason is that to include location as a factor in non-continuation would imply that people from different regions could have different continuation rates, even taking into account their subject of study and their entry qualifications. This would not be acceptable. A further reason is that the differences between the non-continuation rates for students from different regions are small. A location-adjusted benchmark for these indicators would therefore not provide any extra information.
When the tariff score was first introduced, the Performance Indicators Steering Group (PISG) agreed that the groupings to be used in calculating the PI benchmarks should be reviewed after two or three years, once sufficient data were available. However, the effect on the benchmarks for the state school indicators in particular was such that an earlier review was called for.
The review was carried out during this year, and the following paragraphs report the results of this review.
Review of tariff groupings for benchmark calculations
In calculating the benchmarks, students are divided into categories according to the subject they are studying and the qualifications they held on entry to their course. As most full-time students enter with A-levels or Scottish Highers, these are further sub-divided into groups according to the overall scores obtained. Prior to 2002, scoring was based on the best three qualifications obtained, producing a points score with a maximum value of 30. From 2002, the new tariff system was introduced and the old points score ceased to be available. The tariff counts all qualifications obtained and is calculated in a different way. Since the old points score cannot be derived from the new tariff score, the tariff has had to replace the points score in the benchmark calculations. The tariff system is not under the control of either HESA or HEFCE.
There are two main differences between the new tariff scores and the A-level points previously used. First, the relativities between different A-level grades and between A-levels and Highers are not the same for the tariff system as for the former points scores system. Secondly, and probably more importantly for this review, the cap on the number of examinations that could be included was lifted for the tariff. Under the old scheme, the top score was obtained by anyone who had at least three A grades at A-level. Under the new scheme there is no maximum score, so someone with exactly three A grades at A-level will have a lower score than someone with four B grades. This is further complicated by the fact that AS-level grades for those subjects that are not taken on to A-level are also included in the total tariff score.
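The effect of lifting the cap can be illustrated with a short sketch. The grade values below reflect the scoring rules described above (old points: best three A-levels, A = 10 down to E = 2, maximum 30; tariff: all qualifications counted, A = 120 down to E = 40); treat the exact values as illustrative rather than authoritative.

```python
# Illustrative comparison of the old capped points score and the
# uncapped tariff. Grade values are as generally used at the time,
# included here for illustration only.
OLD_POINTS = {"A": 10, "B": 8, "C": 6, "D": 4, "E": 2}
TARIFF = {"A": 120, "B": 100, "C": 80, "D": 60, "E": 40}

def old_points_score(grades):
    """Best three A-level grades only, so the maximum is 30."""
    best_three = sorted((OLD_POINTS[g] for g in grades), reverse=True)[:3]
    return sum(best_three)

def tariff_score(grades):
    """All A-level grades count; there is no cap."""
    return sum(TARIFF[g] for g in grades)

three_As = ["A", "A", "A"]   # old score 30, tariff 360
four_Bs = ["B", "B", "B", "B"]   # old score 24, tariff 400
```

Under the old scheme three A grades reach the 30-point maximum while four B grades score only 24; under the tariff the four B grades overtake the three A grades (400 against 360), which is exactly the reversal described above.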
If individual examination scores were available, then it would be relatively straightforward to either replace the cap, or produce a set of groups that took the individual values into account. However, the scores are recorded as a total for each qualification type – the total score for A-levels and the number of A-levels, the total score for AS-levels and the number of AS-levels, etc. In addition, to obtain the total tariff score it is not possible to sum the totals for the different groups, as there may be duplicate subjects between the groups.
A number of different options were examined. Firstly, the possibility of omitting the tariff score from the benchmark altogether was suggested. However, entry qualifications were identified in the early stages of developing the PIs as an important factor contributing to differences between HE providers, so it was agreed that this suggestion should not be followed.
Secondly, it was suggested that only the tariff score for A-levels and Highers be used, instead of the total tariff. Using only the A-level scores would be technically feasible, but would change the benchmark of those HE providers that are less selective, and would not necessarily change things significantly for the most selective HE providers. In addition, trying to incorporate the Highers score as well is difficult, as (a) there are two categories of Highers to be taken into account, reported separately and whose scores cannot necessarily be combined; and (b) there are a number of students with both A-levels and Highers, again with scores that cannot necessarily be combined.
Thirdly, it was suggested that a group be defined to contain only those students who have at least three A grades at A-level, or five A grades in Scottish Highers. This is feasible, but has the perverse effect that while the most selective HE providers showed a drop in their benchmarks, as expected, those HE providers which generally selected from groups just below the three A grades, e.g. asking for two A grades and one B, showed an increase in benchmark.
In view of this, it was agreed by PISG that the current grouping of tariff scores would continue until a further detailed review could be carried out.
Further to this it was noted that, while the tariff score is at present the only way in which entry qualifications can be included in the benchmarks, PISG should consider what further information it should request HESA to collect in this regard. The outcome of any review by PISG of how the benchmarks are calculated may require alterations to the specification of the dataset collected by HESA. Such changes to the data collection are subject to a record review and consultation process which takes a number of years to implement. It is accepted that the tariff does not provide the full range of information which could allow better differentiation between certain types of HE provider, particularly those that are highly selective, and PISG will work with all HE providers to see how this information can be improved.
- As HE providers become more different in nature, making comparisons between them within the Performance Indicators becomes less valid. Therefore only comparisons between HE providers of a similar nature should be made.
- View full information on how the benchmarks are calculated.
In 2002/03, there were a number of changes to the data collected, to some of the definitions and to a number of the calculations used. They are brought together here for convenience.
The HESA record was modified for the 2002/03 academic year and some of these changes have affected the performance indicators (PIs). In addition, some of the data formerly taken from UCAS records is now being taken from the HESA record.
Two variables, the previous school of the student and the social categorisation of their parents, have been obtained from UCAS records in the past. For 2002/03, UCAS supplied both these fields to HE providers, who checked this information and sent it on to HESA. In some cases, HE providers added extra information to that supplied by UCAS. The main effect for the performance indicators has been to change the proportion of known data available.
The other fields on the HESA record which changed this year, and which have the most effect on the indicators, are those providing information on entry qualifications and the subject fields. The entry qualification scoring system used in 2002/03 is the tariff system rather than the points scores used previously. Also, the subject codes now being used are the JACS codes, agreed as a common coding system for use from 2002 entrants in higher education. Details of the effects of these changes are provided below. Note that these changes do not affect the benchmarks for the indicators of non-continuation or progression, which this year are based on 2001/02 entrants. The UCAS tariff page provides an overview of its use within the benchmark calculations.
For the 2001 census, a new classification, National Statistics Socio-Economic Classification (NS-SEC), was developed to replace Social Class. It took into account new work patterns in the UK and the changes in the education levels required for, and the status of, large numbers of occupations. This new classification was used for the social class PI this year and as a result it will now be called the SEC indicator.
In previous years, the six categories of Social Class were combined by taking classes I, II and IIIn as 'high' social class and classes IIIm, IV and V as 'low' social class. The new classification has seven analytic classes and groups 1 to 3 are used as 'high' class and 4 to 7 as 'low'. This has increased the overall percentage from low social class by over 2.5%.
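The regrouping amounts to a simple relabelling, which can be written down directly (class labels only; the mapping from occupation codes to classes is considerably more involved):

```python
# Old Social Class grouping: I, II, IIIn = 'high'; IIIm, IV, V = 'low'.
def old_social_class_group(social_class):
    return "high" if social_class in {"I", "II", "IIIn"} else "low"

# New NS-SEC grouping: analytic classes 1-3 = 'high'; 4-7 = 'low'.
def nssec_group(analytic_class):
    return "high" if analytic_class in {1, 2, 3} else "low"
```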
In order to see how much of this change is due to the different definition, the mapping published by ONS has been used to obtain the NS-SEC categories from the occupation codes used in the 2001 HESA data collection. The data available on the HESA record were not complete for all HE providers in 2001, but provide some indication of the sort of value that this indicator would take. The results of this analysis show that the value of the indicator based on 2001 data but using the new definition would have been 27.8%. The following table compares this with the UK values published last year and this year.
| | Percent |
| --- | --- |
| From low social class in 2001, based on old definition (published) | 25.8 |
| From low social class in 2001, based on new definition | 27.8 |
| From low social class in 2002, based on new definition (in this document) | 28.4 |
This suggests that the increase since last year is mainly due to the change in definition.
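The decomposition implied by the table can be made explicit (figures taken from the table above):

```python
# Figures from the table above (percentages).
published_2001_old = 25.8   # 2001, old Social Class definition
recalc_2001_new = 27.8      # 2001 data re-scored under NS-SEC
published_2002_new = 28.4   # 2002, new definition

definition_effect = recalc_2001_new - published_2001_old   # ~2.0 points
underlying_change = published_2002_new - recalc_2001_new   # ~0.6 points
total_change = published_2002_new - published_2001_old     # ~2.6 points
```

Of the 2.6 point rise since last year, about 2.0 points are attributable to the change of definition, leaving roughly 0.6 points of underlying change.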
The new subject categories are very similar to the old ones and this year the same groupings of categories have been used for the benchmarks as in previous years. There appears to be little effect on the benchmarks.
The introduction of the tariff score in the sector has led to changes to the benchmark groupings. Most of the change is to the groupings of scores for A-levels and Scottish Highers, but in addition there is a new category for Baccalaureate (formerly included with up to 4 A-level points) and a category for students with both Vocational A-levels (VCE) and A-levels or Highers. The GNVQ level 3 category now contains students who have tariff scores just for VCE qualifications.
The tariff score categories used have been chosen so that as far as possible they are of equal size. These categories will be used for two years and then a decision will be taken as to whether these are the most sensible categories to use. The main problem this year is that there are much larger numbers of students than usual who have A-level or Higher qualifications for whom there is no tariff score – over 25,000 this year among young entrants compared with under 3,000 last year. This may be a temporary problem, to do with deferred entry students, but it is necessary to obtain at least one more year's figures before the pattern can be clarified.
The tariff score, unlike the previous points score, does not cap the number of qualifications that can be included, nor does it set a maximum value for the score. The effect for benchmark purposes appears to be that the high tariff score categories contain a greater proportion of students from the 'under-represented groups' than the 30 A-level point category did last year. This is particularly apparent for the state school indicator. For example, last year 68% of students with 30 A-level points came from state schools, while this year 77% of those with over 480 tariff points came from state schools. It is possible that this is a temporary blip, for the same reason that the proportion of A-levels with unknown tariff scores has increased, but until at least one further year's figures are available we cannot be sure. However, the effect this year has been to increase the benchmarks, particularly the state school benchmark, for HE providers which admit high proportions of students with high A-level scores.
Changes to calculations
Allocation of students to subjects
In previous years, a student with more than one subject of qualification aim was allocated to just one subject group, often 'Combined Studies'. This year, students have been allocated pro-rata to subject groups according to how many subjects they are studying and whether they are doing a balanced combination of subjects or a major/minor split. So, for example a student doing a balanced combination of subject X and subject Y would be allocated half to X and half to Y. This has affected some subject groups more than others and in particular has reduced the number of students allocated to the 'Combined Studies' group from about 25,000 to just under 2,000.
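The allocation rule can be sketched as follows. The balanced 50/50 split for two subjects is as described above; the 2/3 : 1/3 major/minor weights used here are an illustrative assumption, not the published weighting.

```python
# Sketch of pro-rata allocation of one student across subject groups.
# A balanced two-subject combination splits 0.5/0.5, as described above;
# the 2/3 : 1/3 major/minor weights are an illustrative assumption.
def allocate(subjects, split="balanced"):
    if split == "balanced":
        share = 1.0 / len(subjects)
        return {s: share for s in subjects}
    # major/minor: the first listed subject is the major
    major, *minors = subjects
    shares = {major: 2.0 / 3.0}
    for minor in minors:
        shares[minor] = (1.0 / 3.0) / len(minors)
    return shares

allocate(["X", "Y"])                   # {'X': 0.5, 'Y': 0.5}
allocate(["X", "Y"], split="major")    # X gets 2/3, Y gets 1/3
```

Summing these fractional allocations over all students gives each subject group's headcount, which is why the 'Combined Studies' group shrank so sharply once students were spread across their actual subjects.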
It is not straightforward to separate the effects of this change from those of the other changes mentioned above. For example, the 'Combined Studies' group shows much higher percentages from all three under-represented groups than last year, but this could be due to the particular HE providers that still classify provision under this subject, to the effects of the tariff scores, to the reduction in the size of the group, or indeed to some other effect.
Linking files for progression
Until last year, the linking method used for tables T3, T4 and T5 took no account of the student instance (HIN) link introduced in 1998. Last year, table T3 used a revised linking method that used the HIN as a basis and this year the revised method has been extended for tables T4 and T5 as well.
A further change has been made to the way links, once found, are categorised: all records with a zero value for student FTE are now treated as inactive, unless the academic year type is non-standard.
The effect of these changes on the national figures is quite small. Some HE providers show changes that appear relatively large, but these are mainly small HE providers, for which relatively large changes are common. Nevertheless, it should not be assumed that changes between the years are simply due to an improvement or otherwise in the performance of the HE provider.
Calculation of projected outcomes
In addition to the change in linking methods, the states used in calculating the projected outcomes have been re-defined this year. The new states used in the transition matrix are defined in Annex C.
The calculations using the transition matrix are the same as last year, although the matrix itself is slightly smaller. Again, the effects at the national level are small and for most HE providers there is no discernible effect.
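The projection itself is a standard absorbing-chain calculation: the year-on-year transition probabilities are applied repeatedly until (almost) every student has reached an end state. The states and probabilities below are invented for illustration; the actual states are defined in Annex C.

```python
# Illustrative transition matrix over three states:
# 'continuing', and two absorbing states 'qualified' and 'left'.
# All states and probabilities here are invented for illustration.
P = {
    "continuing": {"continuing": 0.80, "qualified": 0.15, "left": 0.05},
    "qualified": {"qualified": 1.0},   # absorbing
    "left": {"left": 1.0},             # absorbing
}

dist = {"continuing": 1.0, "qualified": 0.0, "left": 0.0}  # entry cohort
for _ in range(50):  # iterate until the cohort is effectively absorbed
    nxt = {state: 0.0 for state in dist}
    for state, mass in dist.items():
        for target, p in P[state].items():
            nxt[target] += mass * p
    dist = nxt

projected_qualifying = dist["qualified"]  # ~0.75 for this matrix
```

With these illustrative probabilities, three-quarters of the cohort is projected to qualify, since at each step qualifying is three times as likely as leaving.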
Last year, it was suggested that the efficiency figure should be dropped from table T5, but a number of HE providers felt that it was a useful addition to the other statistics available, particularly as it could be compared across the years. However, in view of the other changes being introduced this year it was decided that the efficiency calculations could no longer provide any continuity with previous years and so it was agreed to drop them.
From 2002/03, the employment indicator was based on the Destinations of Leavers from Higher Education (DLHE) survey which replaced the First Destinations Supplement. Prior to 2002/03, the First Destinations indicators were published by HEFCE in table E1.
The DLHE indicator follows the standard categories for publication and is defined as the number of respondents working or studying (or both) divided by the number of respondents working, studying or seeking work. All other categories are excluded from this indicator. It was agreed by the Performance Indicators Steering Group (UKPISG, formerly PISG) that there should be no second indicator from 2002/03.
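A minimal sketch of the indicator as defined above, with invented category counts:

```python
# DLHE employment indicator: respondents working or studying (or both)
# divided by respondents working, studying or seeking work.
# The counts below are invented for illustration.
responses = {
    "working": 600,
    "studying": 150,
    "working_and_studying": 50,
    "seeking_work": 100,
    "other": 80,   # all other categories are excluded entirely
}

numerator = (responses["working"]
             + responses["studying"]
             + responses["working_and_studying"])
denominator = numerator + responses["seeking_work"]

indicator = 100 * numerator / denominator   # ~88.9% for these counts
```

Note that the 'other' respondents affect neither the numerator nor the denominator, so the indicator is a rate among those in, or seeking to be in, work or study.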
The benchmarks prior to 2002/03 were calculated using a model-based approach: a multi-level model containing eight student-level factors and three HE provider-level factors. It was decided that this approach was no longer necessary and that a simpler method would be more suitable. In part this is because many of the factors included in the previous model were not statistically significant: all three HE provider-level factors, for example, had little effect on the model, and the social class and low participation neighbourhood variables were also not significant. With the number of factors reduced by removing these, the original method used for the other indicators becomes feasible, and this is what has been done.
For most HE providers, the effect of this change on the benchmark was small. For HE providers with at least 200 responding students in the base population, the difference was no more than 1% when rounded to the nearest percent. For smaller HE providers and particularly those with the highest/lowest values for the indicators, the differences may be larger.