About the UK Performance Indicators
UK Performance Indicators (UKPIs) are statistics which compare universities and colleges against benchmarks for Widening participation, Non-continuation, and the Employment or further study of graduates.
Key information about specific data items can be found in the Definitions.
No two HE providers in the UK are alike, but the UKPIs are designed to be objective and consistent measures of every HE provider's performance. The UKPIs are not intended to be league tables; rather, each HE provider should be compared against its benchmark.
The questions below address the design, use and interpretation of UKPIs. If you have a question which isn’t answered here, please let us know by email to [email protected].
All the tables are based on students who were residents of England, Scotland, Wales or Northern Ireland before starting their course. This is different from the standard HESA definition of ‘UK domicile’ which includes students from Guernsey, Jersey and the Isle of Man.
Most of the indicators are shown separately for young and mature students, where young students are those under 21 on 30 September of their year of entry to the HE provider. This is different from the standard HESA definition of age which uses a reference date of 31 August.
All tables are restricted to undergraduate students. Each table’s title, or its chosen filters, will indicate the level and mode of study of students included in the table.
For more detail see the Coverage and Population sections in the Definitions.
From 2018/19 the UKPIs include all higher education providers in England, Wales, Scotland and Northern Ireland. This includes those providers previously known as ‘Alternative providers’. The 2018/19 UKPIs use the methodology previously described as ‘Experimental’ from 2015/16 to 2017/18.
From 2015/16 to 2017/18 Alternative Providers were included in separate ‘Experimental Statistics’.
Up to 2015/16, UKPI releases included only publicly funded Higher Education Institutions and the University of Buckingham.
The Provider mergers and changes page details changes to the population of HE providers over time.
Table T1 shows the percentage of young (under 21) entrants who:
- attended a school or college in the state sector
- came from a low participation neighbourhood (as identified by their postcode), using the POLAR4 (OfS) method from 2018/19.
Table T2 shows the percentage of entrants who come from a low participation neighbourhood and have no previous HE qualification.
Table T7 shows the percentage of students in higher education who are in receipt of Disabled Students' Allowance (DSA).
See the Widening participation page for the main tables, more information and further contextual data.
Table T3 shows the percentage of full-time entrants who were no longer active in HE in the following year.
Table T3e shows the percentage of part-time entrants who were no longer active in HE two years later.
Table T5 uses current data to project longer term outcomes. The table projects what proportion of students will eventually:
- gain a degree,
- leave with a different qualification,
- leave higher education altogether without any qualification or,
- transfer to another HE provider.
Note: Multiple years of data are needed to calculate these indicators. The latest data are used to calculate the non-continuation rate for earlier years.
See the Non-continuation page for the main tables, more information and further contextual data.
Up to 2016/17 Table E1 showed the percentage of graduates who were employed or in further study (or both), among all those who were employed, unemployed, or studying.
Table E1 was based on the Destinations of Leavers in Higher Education (DLHE) survey which asked leavers what they were doing six months after graduation.
For 2017/18 a new indicator is being developed using results of the Graduate Outcomes survey which asked graduates what they were doing fifteen months after leaving. Table G1 will be an experimental indicator for 2017/18.
The purpose of UK Performance Indicators is to:
- Provide reliable information on the nature and performance of the UK higher education sector
- Allow comparison between individual HE providers of a similar nature, where appropriate
- Enable HE providers to benchmark their own performance
- Inform policy developments
- Contribute to the public accountability of higher education.
Because there are such differences between HE providers, the average values for the whole of the higher education sector are not necessarily helpful when comparing providers. We therefore calculate a sector average and then adjust this for each HE provider. The adjustment takes into account the following factors which contribute to the differences between HE providers:
- subject of study,
- qualifications on entry
- age on entry (young or mature).
The average, adjusted for these factors, is called the ‘adjusted sector benchmark’.
For some of the participation indicators, we have also allowed for students’ region of domicile and produced ‘location-adjusted benchmarks’.
For the employment indicator, the benchmark used takes account of a wider range of factors.
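The adjustment described above can be sketched as an indirectly standardised average: for each combination of adjustment factors, take the sector-wide indicator value, then weight those values by the provider's own student mix. The sketch below uses hypothetical figures and a simplified three-factor breakdown; it illustrates the idea only and is not HESA's published calculation.

```python
# Hypothetical sector-wide indicator values (e.g. a continuation rate) for
# each combination of (subject, entry qualifications, age group).
sector_rates = {
    ("medicine", "high", "young"): 0.97,
    ("engineering", "high", "young"): 0.93,
    ("engineering", "mid", "mature"): 0.88,
}

# One provider's (hypothetical) student counts in each combination.
provider_counts = {
    ("medicine", "high", "young"): 200,
    ("engineering", "high", "young"): 50,
    ("engineering", "mid", "mature"): 25,
}

def adjusted_sector_benchmark(counts, rates):
    """Weight the sector rates by the provider's own student mix."""
    total = sum(counts.values())
    return sum(n * rates[cell] for cell, n in counts.items()) / total

benchmark = adjusted_sector_benchmark(provider_counts, sector_rates)
print(f"Adjusted sector benchmark: {benchmark:.1%}")
```

A provider concentrated in subjects with high sector-wide continuation (such as medicine in this sketch) therefore receives a higher benchmark than one with a different mix, which is why comparing an indicator to the provider's own benchmark is more informative than comparing it to the raw sector average.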
There are two recommended ways of using the benchmarks:
1. To see how well a provider is performing compared to the HE sector as a whole.
It is usually preferable to compare an HE provider's indicator to its adjusted sector benchmark in order to establish how well the provider is performing in the HE sector. When there is a significant difference between the HE provider's performance and the benchmark, we have marked it with a symbol. A 'plus' symbol is used for HE providers performing better than the benchmark and a 'minus' symbol for those performing worse.
2. To decide whether to compare two HE providers.
It is hard to meaningfully compare two HE providers that are very different. For example, an HE provider where most students enter with very good A-level qualifications should not usually be compared with one whose students come from a wider range of educational backgrounds. Similarly, a medical school and a college that mainly concentrates on engineering subjects are not comparable, as medical students have much lower non-continuation rates than engineering students.
If two providers are similar in terms of subject mix and entry qualification intake, the benchmarks should be similar and it is probably fair to make comparisons between them. If two HE providers have very different benchmarks, this is an indication that they are so different that comparing them would not give a helpful answer. But note that if two HE providers have very different location-adjusted benchmarks, this may just show that they recruit from different regions of the UK.
Note: Where the number of students within a specified population at an HE provider is small, the values of the indicator could be very variable and should be interpreted with care.
No. Each benchmark is an adjusted average and not a target. Some HE providers may use the UKPIs benchmarks as a useful guide in setting their own targets.
The indicator percentages and contextual data are subject to the HESA policy of Rounding and suppression to anonymise statistics. All counts are rounded to the nearest multiple of five, and percentages are suppressed for populations of less than 22.5.
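The rounding and suppression rules described above can be sketched as follows; this is an illustration of the stated rules only, and the exact published implementation may differ in edge cases.

```python
def round_count(n: int) -> int:
    """Round a student count to the nearest multiple of five."""
    return 5 * round(n / 5)

def display_percentage(numerator: float, population: float):
    """Return a percentage, or None (suppressed) when the population
    is less than 22.5."""
    if population < 22.5:
        return None  # shown as suppressed in published tables
    return 100 * numerator / population

print(round_count(23))             # counts are rounded to a multiple of five
print(display_percentage(10, 20))  # suppressed: population under 22.5
print(display_percentage(30, 40))  # large enough population: shown
```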
From 1998 to 2018 the UKPIs were specified by the UK Performance Indicators Steering Group (UKPISG) with support from the UK Performance Indicators Technical Group (UKPITG). UKPISG last met in 2017, and governance of the UKPIs has since been temporarily undertaken by HESA in consultation with the funding and regulatory bodies for higher education.
A new advisory group is currently being recruited to take forward governance of the UKPIs.
The first Performance Indicators were published by the Higher Education Funding Council for England for the 1996/97 academic year. The UKPIs have been produced and published by HESA since 2002/03.
All historic UKPIs since 2002/03 are available from the Publications archive.
See Changes for a full list of changes to the UKPIs over time.
An archive of governance information for the UK Performance Indicators Steering Group is also available for reference.