I wrote this post in August 2015. A year later, anticipating the next batch of GCSE results, I still have the same concerns. As Geoff Barton has said (in this Guardian article), “I used to know what a C grade in English looked like and a grade A. Now it feels as if someone somewhere, in an obscure back office, makes the decision.”
‘Beating the bounds’ was an ancient practice that still survives in several English parishes. Members of the parish would travel the boundary, beating the marker stones with greenery. Before the advent of accurate maps, this ritual had the very practical purpose of ensuring that everyone knew the agreed position of the parish boundaries should any dispute arise in the future.
In recent years, controversy over exam grade boundaries has become a depressingly regular feature of results days. For some reason the need to ‘maintain standards’ now seems to require an annual moving of the goalposts. I am not referring to the adjustments exam board panels make to raw marks to take account of annual variations in the difficulty of papers (which has always occurred and is explained by examiners each year), but to the wholesale statistical manipulation of results to maintain the proportion of students gaining a particular grade. Evidence of this often has to be inferred from strange grade distributions, the suspicious clumping of candidates’ scores just below the C grade boundary, or the realisation that teachers were spot on with their predictions, except at the C/D boundary.
I believe that for students to succeed, it is essential that they know what they will be assessed on and how they will be assessed. That includes the criteria for each particular grade. In recent years Ofqual has become increasingly muscular in flexing its power to adjust grade boundaries – somehow maintaining standards by changing them. This has led to the peculiar situation where teachers and students strive for improvement and Ofqual seems to do its best to stamp it out!
At this point you may be thinking ‘but what about grade inflation?’ I appreciate that a function of external exams is to identify the differing abilities of candidates and an exam fails in this function if everyone gains the top grades. So, how can we maintain the credibility of exams when schools are continually improving the learning and exam technique of students through better teaching?
I think the answer is transparency. In the annual ritual of beating the bounds it wasn’t just the priests who marked the boundaries but the whole parish community. The whole point was that everyone knew where the boundary lines lay. In a similar way, it is surely important that all involved in education understand the requirements for achieving a particular grade. If, over time, the spread of results for a particular subject becomes too slanted towards top grades, such that the existing assessment criteria are in danger of becoming unfit for purpose, they should be adjusted. Any such adjustment should be announced before students commence the course. In this way, teachers, students, and their parents will all know what is required to secure a particular grade, and the public will be aware that that year’s cohort faces a tougher exam, understanding that a year-on-year comparison cannot be made.
Of course, this already happens. A couple of years ago, GCSE science students in England knew it would be harder to get a higher grade than the year before. This only made it more extraordinary that those same students had to suffer retrospective tinkering with their English Language results after they had sat the exam.
If we have an exam regulator that is adept at monitoring and committed to transparency – if we all participate in an annual beating of the bounds – we should be able to achieve a robust, credible exam system in which students can sit exams assured that what is expected of them will never be changed after the event.