The latest performance tables for secondary and primary schools in England have been released – with parents and educators alike looking to them to understand and compare schools in their area.
Schools will also be keen to see if they have met a new set of national standards set by the government. These standards include “progress” measures, a type of “value-added measure”, which compare pupils’ results with those of other pupils who achieved the same exam scores at the end of primary school.
Previously, secondary schools were rated mainly on raw GCSE results, based on the number of pupils getting five A* to C GCSEs. But because GCSE results are strongly linked to how well pupils perform in primary school, these previous performance tables tended to tell us more about school intakes than about actual school performance. So under the new measures, schools are judged by how much progress students make compared to other pupils of similar ability.
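To make the idea concrete, here is a minimal sketch in Python of how a progress score of this kind can be calculated. The pupil data are invented, and the per-band averaging is a simplification of measures such as Progress 8, not the official formula.

```python
# A minimal sketch of a "value-added" progress measure (hypothetical data;
# a simplification of measures such as Progress 8, not the official formula).
from collections import defaultdict

# (school, prior-attainment band at end of primary, GCSE points) per pupil
pupils = [
    ("A", 4, 48), ("A", 4, 52), ("A", 5, 60),
    ("B", 4, 44), ("B", 5, 58), ("B", 5, 66),
]

# 1. Expected outcome: the mean GCSE points of all pupils nationally
#    who had the same prior attainment at the end of primary school.
by_band = defaultdict(list)
for _, band, points in pupils:
    by_band[band].append(points)
expected = {band: sum(v) / len(v) for band, v in by_band.items()}

# 2. A pupil's progress is actual minus expected ...
# 3. ... and a school's score is the average progress of its pupils.
by_school = defaultdict(list)
for school, band, points in pupils:
    by_school[school].append(points - expected[band])
for school, scores in sorted(by_school.items()):
    print(school, round(sum(scores) / len(scores), 1))
```

The point of the calculation is that a school is only credited, or penalised, for the distance between its pupils’ results and those of comparable pupils elsewhere.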
Under this approach, it is easier to identify schools that achieve good results despite low starting points, as well as schools whose very able students make relatively little progress compared with able pupils at other schools.
But even with these fairer headline measures, the tables still tell us relatively little about school performance. This is because there are serious problems with the use of these types of “value-added measures” to judge school performance – as my new research shows. I have outlined the main issues below:
Intake biases
Taking pupils’ starting points into account when judging school performance is a step in the right direction, because it means schools are held accountable for the progress pupils make while at the school. It also focuses schools’ efforts on all pupils making progress, rather than just those on the C/D grade borderline, which was so crucial to success under the previous measure.
But school intakes differ by more than their prior exam results. My study finds that over a third of the variation in the new secondary school scores can be accounted for by a small number of factors such as the number of disadvantaged pupils at a school, or pupils at the school for whom English is not their first language. This means the new measure is still some way off “levelling the playing field” when comparing school performance.
In my research, I examined how much school scores would change if these differences in context were taken into account. While schools with a “typical” intake of pupils may be largely unaffected, schools working in the most or least challenging areas could see their scores shift dramatically – by as much as five GCSE grades per pupil, on average, across their best eight subjects. And these are just the “biases” we know about and have measures for.
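For readers curious about the mechanics, here is a minimal sketch in Python of the kind of adjustment involved. It regresses simulated school scores on two invented intake factors, reports the share of variation they account for, and treats the leftover residuals as context-adjusted scores. None of the numbers or effect sizes here are from my study.

```python
# A sketch of asking how much school-level variation intake factors explain
# (hypothetical data; the factors stand in for measures such as the share
# of disadvantaged pupils). Requires numpy.
import numpy as np

rng = np.random.default_rng(0)
n_schools = 200

# Invented intake characteristics per school.
pct_disadvantaged = rng.uniform(0, 0.6, n_schools)
pct_eal = rng.uniform(0, 0.4, n_schools)  # English as an additional language

# Simulated progress scores that partly track intake (illustration only).
progress = -2.0 * pct_disadvantaged + 1.0 * pct_eal \
    + rng.normal(0, 0.3, n_schools)

# Regress the progress scores on the intake factors.
X = np.column_stack([np.ones(n_schools), pct_disadvantaged, pct_eal])
coef, *_ = np.linalg.lstsq(X, progress, rcond=None)
fitted = X @ coef

# R^2: the share of variation in school scores the intake factors explain.
r2 = 1 - np.sum((progress - fitted) ** 2) \
    / np.sum((progress - progress.mean()) ** 2)
print(f"R^2 = {r2:.2f}")

# The residuals are what a "contextualised" comparison would judge schools on.
adjusted = progress - fitted
```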
Unstable over time
My research also replicated previous studies which found that secondary school performance is only moderately “stable” over time when looking at relative progress: less than a quarter of the variation in school scores can be accounted for by the same schools’ performance three years earlier. I extended this analysis to primary school level, where I found stability to be lower still.
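The “variation accounted for” figure here is simply the square of the correlation between a school’s scores in the two years. A minimal sketch in Python, with invented numbers chosen so that the result lands below a quarter:

```python
# Correlate school scores three years apart and square the correlation to
# get the share of variation explained (hypothetical numbers, illustration
# only). Requires numpy.
import numpy as np

rng = np.random.default_rng(1)
n_schools = 150

# Simulated school scores with only a modest link between the two years.
scores_2013 = rng.normal(0, 1, n_schools)
scores_2016 = 0.45 * scores_2013 + rng.normal(0, 0.9, n_schools)

r = np.corrcoef(scores_2013, scores_2016)[0, 1]
print(f"correlation r = {r:.2f}, variation explained r^2 = {r * r:.2f}")
```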
The recent “value-added” progress measures are slightly more stable than the former “contextualised” measure, which took many pupil characteristics as well as previous exam results into account. But given the “biases” relating to intakes, such as strong links with pupil disadvantage, higher stability is probably not a good thing: it most likely reflects differences in school intakes rather than in school performance. The real test is whether the measure is stable once these “predictable biases” are removed.
Poorly reflect range of pupils
League tables by their very nature give the scores for a single group in a single year. This means the performance of the year group that left the school last year (as given in the performance tables) reveals very little about the performance of other year groups – and my research supports this. I looked at pupils in years three to nine – ages seven to 14 – to examine the performance of different year groups in the same school at a given point in time.
I found that even the performance of consecutive year groups – years five and six, say – was only moderately similar. For cohorts separated by two or more years, levels of similarity were low. This inconsistency can also be seen within a single year: even very high or low performing schools tend to have a huge range of pupil scores.
This all goes to show that school performance tables are not a true or fair reflection of a school’s performance. While there is certainly room to improve the measures, my research suggests that relative progress measures will never be a fair and accurate guide to school performance on their own.
By Tom Perry, Visiting Lecturer, University of Birmingham. This article was originally published on The Conversation. Read the original article.