This post was originally written the week before the 2017 A Level exam results were released (hence the reference to 17th August on the image). I then updated it with the postscript once the results were published. I also re-posted my post on UCAS clearing.
I teach psychology (among other things) and last year I wrote about the Summer 2016 AS exams which were then the first test of the new specification, my teaching of it and interpretation of the assessment criteria. You can read that post here.
This year we’re waiting for the first results for the full two-year Advanced Level exams. While we had a good experience with AS, all those concerns about the first run-through of a Specification are still in my mind as I wait for the Advanced psychology results:
- How will my students perform in the actual exams as opposed to our own assessments based on specimen materials?
- Will performance nationally vary widely from the usual norm, with a large consequent adjustment of grade boundaries (either up or down)?
1. Performance in the actual exams
One of the reasons I opted for the AQA specification was the support this board offered for the new specification, including sample assessments, mark schemes and commentaries. The last time the specification changed, the actual exam papers had contained some questions very different in style from the somewhat sparse sample papers. Support from AQA in advance was much better this time: there hadn't been the same differences in the AS papers, nor were there in the A Level exams this summer.
There were quite a few widely-reported errors in exams this season, and more recent reporting of the possible impact on students, for example this article from The Guardian on ‘the stress of sitting new untested exams’. Whether or not there were more mistakes than usual, this publicity does seem to have shaken the confidence of many students in the exams process itself.
Although there were no errors in AQA psychology papers, one thing my students did have to contend with was errors in their brand new text books, particularly first print runs of first editions. I’ve seen this before when publishers rush to get texts out for new specifications. There are often mislabelled images, errors in tables, or inaccuracies in the indexing (i.e. mistakes arising in the production of pages, rather than the authors’ text) but this time there seemed to be several factual errors. Much as it gives my ego a boost to be able to show through reference to primary sources that I was right and the textbook was in error, it doesn’t help students (except perhaps to question everything) and shakes their confidence in their reference materials.
2. Will performance vary nationally with unpredictable consequences?
This is a question we will only be able to answer when the results are out. As I wrote in my post about the AS results, such problems have occurred in the past when specifications have changed, most notably in 2011 (DFE, 2012). This did not seem to be the case for the 2016 AS exams, although more A grades were awarded in psychology. Hopefully this is an indication that Ofqual are on the ball and ensuring a smooth transition between specifications, so that students sitting the first year of a new exam will not be penalised.
Nevertheless, whatever the speculation, it’s the actual results that matter. So, like my year 13 students, I’ll be awaiting the A level results a little more nervously than usual this year. I’ll also be hoping that their results, and everyone else’s, will be a true indication of each student’s performance.
Postscript – 18th August 2017
It seems, now that the results are available, that there was not wide variation nationally compared with the 2016 results (see this Ofqual infographic), although the media made much of the fact that more boys than girls received top grades. A* and A grades for the new A levels were slightly down on 2016, with Ofqual stating that the changes reflected differences in prior attainment. The proportion of top grades in (unreformed) languages increased, as had been previously agreed, to counter skewing of results by native speakers. I find it interesting that Ofqual's analysis focussed on the top grades.
As for psychology, the proportion of A*/A grades fell 0.3% to 18.8%. There weren't any shocks as far as my own students' results went, although a couple did a bit better than I predicted and a couple missed out on a grade. It's a small number to draw valid conclusions from, but if there was a theme, I think it was that those who worked hard did well, irrespective of their starting point, which must be a good thing.