Education Committee

Further written evidence submitted by AQA (Annex A)

Examination Awarding Process

The setting of grade boundaries is necessary because question papers vary in demand between examination series and between awarding bodies. Because of this variation, differences in average raw marks over time or between awarding bodies cannot be interpreted directly. Some variation in demand is inevitable unless all examination questions are thoroughly pre-tested, which would significantly increase the cost of the system.

Thus, grading (or awarding) is the process by which standards are maintained over time and between awarding bodies. Grading draws on a range of evidence from both sophisticated statistical analyses and the judgements of senior examiners, who are experienced subject specialists.

The statistical analyses measure how the general ability of the students entered for the qualification compares with that of students entered in the previous year. For example, in setting A-level grade boundaries, the average GCSE performance of the students entered is compared with that of the previous year's entry. If the GCSE performance of the students across the two years is identical, the statistical evidence would suggest that little or no change in A-level outcomes is likely.

If, on the other hand, the A-level had attracted an entry with a much lower GCSE performance than in the previous year, then some decrease in A-level outcomes would be anticipated. These comparisons take into account any year-on-year fluctuations in national GCSE results.
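As a purely illustrative sketch of the idea described above (this is not AQA's actual statistical model), the calculation can be thought of as grouping this year's entry into prior-attainment bands and applying the previous year's grade rates within each band. All bands, rates and counts below are invented for illustration.

```python
# Illustrative sketch only: predict this year's A-level outcomes from the
# prior (GCSE) attainment profile of the entry, by applying last year's
# grade rates within each prior-attainment band to this year's entry.
# Bands, rates and counts are invented; the real model is more sophisticated.

# Proportion of last year's students in each prior-attainment band who
# achieved at least a grade A at A-level.
last_year_grade_a_rate = {
    "high_gcse": 0.60,
    "middle_gcse": 0.25,
    "low_gcse": 0.05,
}

# Number of students entered this year in each prior-attainment band.
this_year_entry = {
    "high_gcse": 4000,
    "middle_gcse": 10000,
    "low_gcse": 6000,
}


def predicted_grade_a_percentage(entry, band_rates):
    """Apply last year's per-band grade rates to this year's entry profile."""
    total_students = sum(entry.values())
    expected_a_grades = sum(entry[band] * band_rates[band] for band in entry)
    return 100 * expected_a_grades / total_students


prediction = predicted_grade_a_percentage(this_year_entry, last_year_grade_a_rate)
print(f"Predicted percentage at grade A: {prediction:.1f}%")
```

If the entry shifts towards students with lower GCSE attainment, the predicted percentage falls, which is the behaviour described above; if the two years' prior-attainment profiles are identical, the prediction matches the previous year's outcome.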

Crucially, these analyses are based on the national outcomes in any qualification and thus ensure that the different awarding bodies’ standards are aligned. If, for example, awarding body A’s A-level in Fictional Studies attracted an entry with lower GCSE results than awarding body B’s A-level in Fictional Studies, then awarding body A would be predicted to have lower grading outcomes than awarding body B. The setting of the grade boundaries would reflect this expectation.

Unfortunately, it is these legitimate differences in outcomes between awarding bodies that are misinterpreted as evidence of differences in standards between the awarding bodies. Raw outcomes cannot be compared meaningfully between awarding bodies; it is essential to take into account the relative ability of each awarding body's entry.

It is important to note that these predictions are not made at the level of the individual student and a student’s past performance does not determine their grade. Instead, the predictions look at the whole qualification entry, that is, across tens to hundreds of thousands of students.

Working at this scale makes the predictions highly reliable. The predictions are, however, only one of many forms of technical evidence used to support examiners' judgements about where to place grade boundaries; others include analyses of the outcomes of those schools and colleges common to the two years that have stable entry patterns, re-sitting rates and the performance of re-sitters, and teachers' estimated grades.
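One of those supporting analyses, the comparison of schools and colleges common to the two years, can be sketched in a similarly simplified way. The centre names and figures below are invented, and the real analysis also checks that entry patterns are stable.

```python
# Illustrative sketch only: compare the A-grade rate in centres (schools and
# colleges) that entered students in both years. Names and figures invented.

last_year_a_rate = {"Centre 1": 0.58, "Centre 2": 0.31, "Centre 3": 0.12}
this_year_a_rate = {"Centre 1": 0.57, "Centre 2": 0.33, "Centre 3": 0.11, "Centre 4": 0.40}

# Restrict the comparison to centres present in both years.
common_centres = sorted(set(last_year_a_rate) & set(this_year_a_rate))

average_last = sum(last_year_a_rate[c] for c in common_centres) / len(common_centres)
average_this = sum(this_year_a_rate[c] for c in common_centres) / len(common_centres)

print(f"Common-centre A-grade rate: {average_last:.1%} last year, {average_this:.1%} this year")
```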

The statistical analyses provide extremely helpful context, but cannot detect whether there are actual changes in the performance of the cohort. They provide a starting point for teams of senior examiners to scrutinise the students' work carefully.

The examiners review the work of students who fell on the previous year's grade boundaries, compare it with the performance of this year's students, and so select the most defensible and fair grade boundaries, taking into account any changes in the demand of the question papers. As noted above, it is this fluctuation in demand that explains why grade boundaries need to change from year to year and cannot be fixed in advance.

Great care is taken to document aspects of the performance of the students, particularly if the examiners believe that there has been a fall or rise in performance that is not supported by the statistical evidence.

Indeed, any change in outcomes that differs by more than just 1% from what we would expect on the basis of the technical evidence triggers an even more robust investigation of both the performance of the students and the reliability of the statistics. This ensures that we can be confident that any changes in outcomes are fair and defensible.
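Purely as a numerical illustration of that kind of tolerance check, the comparison amounts to the following. The one-percentage-point tolerance and all figures here are assumptions made for illustration only.

```python
# Illustrative sketch only: flag any grade where the proposed cumulative
# outcome differs from the statistically expected outcome by more than a
# small tolerance. The tolerance and all figures are assumed for illustration.

TOLERANCE = 1.0  # percentage points (assumed for this sketch)

expected_cumulative = {"A": 26.4, "B": 52.1, "E": 97.8}  # expected % at or above each grade
proposed_cumulative = {"A": 27.9, "B": 52.5, "E": 97.6}  # outcomes implied by proposed boundaries

for grade, expected in expected_cumulative.items():
    difference = abs(proposed_cumulative[grade] - expected)
    if difference > TOLERANCE:
        print(f"Grade {grade}: {difference:.1f} point difference - further investigation required")
    else:
        print(f"Grade {grade}: within tolerance ({difference:.1f} points)")
```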

A more complete description of the awarding process can be found in the Guide to Standard Setting.

February 2012
