Select Committee on Children, Schools and Families Third Report


3  TARGETS AND PERFORMANCE TABLES

63. In the consultation document Making Good Progress, the Government refers to "the framework of tests, targets and performance tables which have helped drive up standards so sharply over the past decade".[93] Key Stage tests are used to generate data on pupil performance at the end of each Key Stage. This data is then collated and used to "measure trends across time, across schools, and by almost every conceivable characteristic of the pupils". The Government's main focus at present is on improvement measured by average achievement across a school at the end of each Key Stage and, to this end, schools are given targets.[94] The results for each school are aggregated into Achievement and Attainment Tables which allow comparison of school results, either in terms of absolute test scores or in terms of Contextual Value Added scores (a measure of school performance adjusted for a variety of factors which have been statistically shown to have an impact on an individual's progress: see further paragraphs 90-103 below). These tables are also referred to as "performance tables" (the term we have adopted as a shorthand) and "league tables". The latter term is, perhaps, less accurate because the tables published by the DCSF do not rank schools in order of achievement, although ranked tables are compiled and published by the news media annually.[95]

64. The Government states that schools are provided with a number of tools to understand and track attainment:

  • the DCSF provides tools to track the attainment of individual pupils adjusted for their starting point (value added) and for social factors (Contextual Value Added);
  • Local Authorities provide Fischer Family Trust estimates of future pupil attainment, based on prior attainment;
  • the DCSF provides schools with the Autumn Package, an annual set of data, together with an analytic tool, the Pupil Attainment Tracker, to assist in measuring the progress of individual pupils;
  • these materials can now be accessed online through the RAISEonline system (Reporting and Analysis for Improvement through School self-Evaluation).[96]

65. Witnesses to this inquiry have, however, challenged the Government's assertions that tests, targets and performance tables have driven up standards. The NASUWT said that:

    There is little evidence that performance tables have contributed to raising standards of attainment. A growing number of international studies show that other comparable education systems, including those in Wales, Scotland and Northern Ireland, have reached and maintained high educational standards without use of the performance tables.[97]

66. The NUT similarly argued that there is no evidence that tests, targets and performance tables have had such an effect. The ATL (Association of Teachers and Lecturers) also noted the Government's assertions of improving standards, but questioned "whether this means that our pupils are learning more and better". It referred to research at Durham University suggesting that pupils who reach Level 4 at Key Stage 2 do not retain what they have learned over a period of six months to a year.[98] The ASCL (Association of School and College Leaders) considers that the aggregation of individual test scores creates a high-stakes testing system which will inevitably create a false picture of progress.[99] The ASCL argues further that the Government has produced no evidence to support the contention that targets and performance tables have driven up standards in recent years, a contention "which has taken on the aspect of a dogma".[100] The ASCL considers it more likely that increased investment in the education system, leading to better leadership, staffing and facilities, has led to improved performance.[101]

Targets

67. Targets for pupil attainment are set with the intention that children should meet the expected levels for their age in the core subjects of English, mathematics and science. Ralph Tabberer, Director General of the Schools Directorate at the DCSF, emphasised that targets are set at a level which ensures that the pupil achieving it is ready to move on to the next stage of schooling. For example, Level 4 is set as the target at the end of Key Stage 2 because that is the level which ensures the pupil is ready to move on to the secondary curriculum.[102]

68. The IPPR (Institute for Public Policy Research) notes the considerable emphasis the Government has placed on what the IPPR terms the 'standards agenda', that is, on increasing the proportion of pupils achieving target levels in Key Stage tests and on minimising the number of schools which do not meet the targets. The Government has placed its faith in a "quasi-market in school places", in which parental choice should drive up standards, with targets and performance tables placed at the heart of this mechanism. Value added measures and CVA are intended to provide more context for the raw results.[103] The IPPR continues:

    Results are now used to inform school decisions about performance-related pay, to inform Ofsted decisions about whether schools should be given light or heavy touch inspections and, combined with targets, to inform judgments about the efficacy of educational initiatives such as the Primary Strategies.[104]

69. Further, the IPPR notes that test data is now aggregated into sophisticated data banks, most recently RAISEonline, allowing for performance monitoring, progress evaluation, problem diagnosis, resource allocation and target setting based on "a full understanding of what might be achievable"[105]. This, it is argued, has facilitated top-down performance management by government, but also allows local authorities and schools to set attainment targets for their pupils, to assess pupil progress against that of similar pupils elsewhere and to assess school results against national comparators. It is even possible to make comparisons at the level of individual test questions. The IPPR considers that this is a powerful tool for "supporting informed and rigorous self-management".[106]

70. Targets, according to the Government, are a primary means of focussing the efforts of schools and teachers on achieving these ends. Making Good Progress sets out clearly the Government's aims for the education system and the relevance of targets:

    The reason for pursuing higher standards is not in order to achieve numerical targets or to deliver accountability. Useful and necessary as these are, they are the servants and not the masters. The data and targets we set are the means towards the objective of equipping pupils with the skills and knowledge they need: education for self-fulfilment, access and equality. So it is important that we use our data and set our accountability targets to achieve the ends we most value.[107]

The Minister, Jim Knight, told us that schools take their performance in tests very seriously and that this drives the prioritisation of English, mathematics and science. He stated that there was evidence that sharp accountability has contributed to improvement in those areas.[108] The DfES stated that:

    The publication of threshold measures of performance is a strong incentive for schools and colleges to ensure that as many pupils/students as possible achieve the required standard, particularly at Key Stages 1-3, in the core subjects of English, mathematics and science.[109] […]

    Used together, threshold and CVA measures provide a powerful tool for school improvement and raising standards across the education system, enabling us to track changes in performance over time nationally and locally, and at school and individual pupil level […]"[110]

71. The Minister told us that a target has been put in place that at least 30% of pupils in every school should achieve five GCSEs at grades A* to C, including English and mathematics. The Minister said that, a decade ago, half of all schools did not have more than 30% of pupils achieving five GCSEs at grades A* to C, but that figure is now down to 21% of schools. This measure is used to target schools needing improvement.[111]

72. Much of the criticism directed at national tests actually derives from the distortions created by performance targets. When the current administration assumed responsibility for the 'delivery' of education, the accountability structures which were put in place were based on pupil performance in national tests. Pressure was applied to the system by means of targets and performance tables, with educational outputs regularly measured and a wide variety of strategies and initiatives put in place to increase productivity.[112] Viewed in this light, targets based on national, summative test results are not the servant but are the engine which drives productivity in the education system.

73. Test results are not the output of education, but a proxy for the education taking place every day in classrooms across the country. OCR, one of the Awarding Bodies accredited by the QCA, argues that problems arise when test results, designed to measure pupil attainment, are used as a proxy measure for different purposes:

    The use of qualifications in school performance tables, national targets, OECD comparisons etc leads to misinformation and drives undesirable behaviours.[113]

74. When high stakes are attached to the proxy, rather than to the education it is meant to stand for, distortion may occur in the shape of teaching to the test, narrowing of the taught curriculum to those subjects likely to be examined, and a disproportionate share of resources diverted to pupils on the borderline of achieving the target standard, to the detriment both of higher achievers and of those with little or no hope of reaching the target, even with assistance.[114] Brian Lightman, President of the ASCL, told us that:

    […] if your target focuses on five grades A* to C, inevitably, the focus will be on those with four and who are nearly heading towards the fifth. You will concentrate on giving those children the extra help. […] The [other] children […] who do not quite fit into those categories, will be left out. That has been one of the major shortcomings of this target-setting culture over many years. For example, the focus of GCSEs has been very heavily on the C-D border line, and not, for example, on students underachieving by getting a grade A, but who could hopefully get an A*, or on those getting a B, but who could be helped to get an A.[115]

Even the Minister, Jim Knight, admitted that:

    At the end of Key Stage 2 there is too much focus in some schools on people on the margins of a level 4 and not on the rest, because that is where the measure is.[116]

The effect of concentrating on borderline pupils can be pernicious for the individual. The Association of Colleges stated that, whilst a pupil may have the necessary grades to progress to the next level, if that learning is shallow, focussed only on passing the test, they may not have a full grasp of the necessary concepts or sufficient intellectual rigour to deal with the demands of the next level. It concludes that "This raising of false expectations resulting in a sense of inadequacy may well account for the high drop-out rate at 17".[117]

75. Targets have had the effect of refocusing effort on maximising a school's achievement, for which it is held accountable, rather than crediting the achievements of individual pupils. This is because, using the mechanisms of teaching to the test and narrowing the taught curriculum, it is possible to inflate test scores without improving the underlying education of the children taking those tests.[118] The ATL cites a study which finds that a focus on the achievement of a particular level, together with booster classes, may have the effect of assisting pupils to achieve a Level 4 in mathematics, for example, but that this level is not sustained over a period of six months to a year.[119] Professor Peter Tymms of Durham University told us that externally-imposed targets aimed at complex processes, such as teaching and running a school, are less effective than targets aimed at simple tasks which are easily quantified. In his view, targets for schools should come from within rather than being imposed externally.[120]

76. Section 2 of Making Good Progress gives an indication of the Government's view of attainment against targets. The national results for pupils in English at Key Stages 2 and 3 are set out, with Level 4 being the expected standard at Key Stage 2 and Level 5 the expected standard at Key Stage 3. In both cases, the 2008 target is that 85% of all pupils should reach the expected standard by the end of those Key Stages. The figures presented suggest that there is some distance to cover to reach these targets: in 2006, 79% of pupils achieved Level 4 or above at Key Stage 2; and in 2005, 74% of pupils achieved Level 5 or above at Key Stage 3. The Government states that those not meeting expected standards are "moving too slowly", and that "It is disappointing that they have made such slow progress". The language used to refer to these struggling pupils is that of "success" and "failure", for example, "[…] by no means all of [those children] with SEN had severe neurological problems effectively preventing success"; and "[…] apart from severe neurological disorder, none of these characteristics necessarily results in failure". But most parents want their children to make the best progress they are capable of making. We warn later (paragraphs 212-218) against developing a crude national progress standard which demands that every child progresses at the same rate, but nevertheless feel that a more subtle measurement of progress could act as the spur to improved teaching and learning.

77. When the assessment levels, for example Level 4 for 11 year-olds, were developed in the 1980s, they were the levels achieved by the average child. In the following years they have become a minimum expected standard, thus creating an expectation that improved teaching and learning can enable most pupils to achieve what used to be average levels. In this light, the Government's proposal in Making Good Progress that pupils should progress by two levels in one Key Stage is an even more challenging target and we are concerned that it runs counter to the Government's policy on personalisation.

78. The Government's point is, quite reasonably, that by no means all of those children falling short of expected standards are necessarily incapable of achieving them and that different teaching and learning strategies may improve their performance and help them to fulfil their potential. However, we think that the language of 'success' and 'failure' highlights a problem with the standards agenda which the Government's reasoning does not address. The NAHT challenged the Government's approach, pointing out that children learn at different rates and in different ways. Some children will easily surpass the expected standards at the end of a Key Stage and others will need much longer to reach them. Schools should focus on assisting children to reach the goals which are appropriate for them as individuals. The NAHT concludes by stating that:

    We must not label as failures 11 year olds who learn more slowly or who have skills in different aspects which cannot be described in such concepts as "level 4". What is a level 4 Happiness or a level 5 Social Responsibility? How can we expect a certain, arbitrary percentage to succeed or fail? More importantly, why should we?[121]

79. Mick Brookes, General Secretary of the NAHT, stated that target-setting is of extreme importance to individual children and should not be controlled centrally by government:

    If you set targets too high, the child cannot do it, becomes frustrated and disconnects. If you set that target too low, the child becomes bored and disconnects; they then leave school as soon as they can—24% of them. So target-setting is a very individual and personalised event.[122]

The ATL similarly expressed concern that the perceived importance of targets, especially Level 4 at Key Stage 2, is so strong that many pupils who do not reach that level feel like failures. Other witnesses have made related arguments in relation to children with special educational needs. It has been argued that their 'failure' to meet national targets leads to them being marginalised and devalued.[123] Worse, because of the way school accountability is tied to test results for each school, these individuals are seen as a burden on the school itself as they drag down its aggregated test scores.[124]

80. The NUT drew our attention to the EPPI (Evidence for Policy and Practice Information) study (2004) on the impact of repeated testing on pupils' motivation and learning. The review concluded that repeated testing and examination demotivated pupils and reduced their learning potential, as well as having a detrimental effect on educational outcomes. Other key findings included evidence that teachers adapt their teaching style to train pupils to pass tests even when those pupils lack an understanding of the higher order thinking skills the tests are intended to measure, and that National Curriculum tests lower the self-esteem of unconfident and low-achieving pupils.[125]

81. We endorse the Government's view that much can and should be done to assist children who struggle to meet expected standards. However, we are concerned that the Government's target-based system may actually be contributing to the problems of some children.

82. We believe that the system is now out of balance in the sense that the drive to meet government-set targets has too often become the goal rather than the means to the end of providing the best possible education for all children. This is demonstrated in phenomena such as teaching to the test, narrowing the curriculum and focussing disproportionate resources on borderline pupils. We urge the Government to reconsider its approach in order to create incentives to schools to teach the whole curriculum and acknowledge children's achievements in the full range of the curriculum. The priority should be a system which gives teachers, parents and children accurate information about children's progress.

Performance tables

83. The national test results for each school are aggregated into performance tables, published by the DCSF, which allow comparison of school results, either in terms of absolute test scores or in terms of Contextual Value Added scores (see further paragraphs 90-102). The tables also allow for instant comparison of a school's results against both the local authority average and the national average. A limited amount of background information is given for each school, including the number of pupils on the school roll and the number and percentage of pupils with special educational needs ("SEN").[126] The DCSF does not rank schools in order of achievement although, as we have mentioned above (paragraph 63), ranked tables are compiled and published by the news media. Although some have argued that performance data should not be published in tables, others have countered that, if the Government did not do it, the media would.[127] The Minister told us that he thought it was better that this data should be published in a controlled and transparent manner by the Government, rather than leaving publication to the media. He thought that there would be "an outcry that we were hiding things" if the Government did not publish performance data on schools.[128]

84. One of the major criticisms of performance tables is that they do not provide a true reflection of the work performed in schools.[129] A significant aspect of this argument relates back to the discussion in Chapter 2 on the purposes of testing. If national tests are primarily designed to measure pupil attainment at particular points in their school careers[130], can they be said to measure the performance of schools with a sufficient degree of validity? The answer to this question depends on what is being measured. If school performance is viewed purely and simply in terms of its pupils getting the highest possible marks, especially in comparison with other, similar schools, then test results might be a valid measure of school performance. If, on the other hand, school performance is seen as rather broader than this, to include teaching of a full and rounded curriculum, artistic, cultural and sporting activities and good pastoral care, as well as academic achievement, then aggregated test results are not a valid measure of school performance, because they only measure one, narrow aspect of it.[131] As the Advisory Committee on Mathematics Education stated:

    Clearly the league tables only measure what is tested and not wider aspects of school performance.[132]

85. The NAHT similarly argued that:

    Even modified by social deprivation or value added factors, [performance tables] can only give a distorted snapshot of the work of a vibrant and organic community.[133]

86. We were given an extreme example of this by Mick Brookes, General Secretary of the NAHT. He told us about a school which persistently languishes at the bottom end of the performance tables but whose Ofsted report paints a picture of excellent leadership, good improvement in all areas of school life and a strong commitment to the personal development of all pupils. Mr Brookes continued:

    There are very good schools that work against the odds to produce higher educational qualifications than they have ever had in their areas, but they are disabused of that excellent work on an annual basis.[134]

87. The NAHT stated that, although national tests provide statistically valid results at a national level, individual school cohorts may be small in size, with a single pupil counting for more than 15% of the overall score for a school. Statistically, this means that the ranking of a small school according to data in the performance tables can be extremely volatile from year to year, depending on the cohort being tested.[135] The NAHT therefore calls for care in the interpretation of test results at school level and argues that test data should be used as one indicator, alongside many others, of school performance, rather than as "a determinator".[136]
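The volatility the NAHT describes is a straightforward consequence of cohort size. A minimal sketch, using purely illustrative figures not drawn from any real school, shows how one pupil's result can swing a small school's headline percentage:

```python
# Illustrative figures only (not from any real school): in a cohort of
# six, each pupil accounts for 1/6, i.e. roughly 16.7%, of the
# school's headline score.

def percent_reaching_level(results):
    """Percentage of a cohort reaching the expected level (True = reached)."""
    return round(100 * sum(results) / len(results), 1)

year_one = [True, True, True, True, True, False]   # 5 of 6 reach Level 4
year_two = [True, True, True, True, False, False]  # one more pupil misses

print(percent_reaching_level(year_one))  # 83.3
print(percent_reaching_level(year_two))  # 66.7
# A single pupil's result moves the school almost 17 percentage points,
# enough to shift its position in a ranked table dramatically.
```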

88. The GTC argued that parents' desire for information about their children's learning and progress will not be best served by the single measure of Key Stage test results. The information presented in the performance tables does not tell parents and the local community the full story about the education taking place in a school. The GTC cited evidence which indicates that, when parents make judgements about the quality of a school, they do not use the school's position on published league tables as their main criterion (GfK NOP Social Research 2005). Parents, it argues, require broadened and enriched sources of information about their local schools.[137] Keith Bartley, the Chief Executive of the GTC, told us that MORI research in 2005 showed that parents attributed low value to performance table data when determining their choice of school partly because they found the information confusing.[138] The Minister disagreed with this view, stating that "They are very simple and easy to understand".[139]

89. We endorse the view, put forward by many witnesses, that the data presented in the performance tables give only a very limited picture of the work which goes on in a school. It is, therefore, never appropriate to rely on this information alone when forming a judgment about a school's overall performance.

90. Comparison of schools based on raw test scores is, we think, self-explanatory. We shall, however, explore in a little more detail the use of Contextual Value Added scores ("CVA") and what they mean. CVA scores at school level are, essentially, a measure of progress over time from a given starting point and they provide a means of assessing the relative effectiveness of a school. Unlike the raw test scores presented in the performance tables, they are not a measure of absolute attainment. The period of the progression measurement will be given for each score, for example Key Stage 1-2 or Key Stage 2-4. Although CVA is a measure of progress based on prior attainment, it is adjusted to take account of a variety of factors which have been statistically shown to have an impact on an individual's progress.

91. The first step is to calculate a CVA score for each pupil. These scores are then used to calculate the CVA score for the school as a whole:

THE PUPIL CALCULATION

THE SCHOOL CALCULATION

Simply put, a pupil's CVA score is effectively the difference between a set of results predicted according to the CVA model and the results actually achieved. A school adding value will enable the pupil to outperform the results expected of him or her according to the CVA model. In order to calculate the CVA score for a school as a whole, the average of the CVA scores of its pupils is taken, and an adjustment is then made for the number of pupils in the school's cohort (the shrinkage factor).
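The arithmetic described above can be sketched as follows. This is a hypothetical illustration only: the DCSF's actual model is a multilevel regression over many contextual factors, so the predicted scores are simply taken as given here, and the shrinkage formula is an illustrative weighting towards the centre of the scale rather than the published one.

```python
# Hypothetical sketch of the CVA arithmetic. Assumptions: predicted
# scores come from some external model; the shrinkage formula below is
# illustrative, not the DCSF's published method.

CENTRE = 100.0  # Key Stage 1-2 CVA scores in the tables are centred
                # on 100 (Key Stage 2-4 scores on 1,000)

def pupil_cva(actual, predicted):
    """Pupil CVA: the actual result minus the model's prediction,
    expressed around the centre of the scale."""
    return CENTRE + (actual - predicted)

def school_cva(pupil_scores, shrinkage=10.0):
    """School CVA: the mean of the pupils' CVA scores, shrunk towards
    the centre. Smaller cohorts are pulled more strongly, damping the
    chance variation that dominates small schools' results."""
    n = len(pupil_scores)
    mean = sum(pupil_scores) / n
    weight = n / (n + shrinkage)  # illustrative shrinkage factor
    return CENTRE + weight * (mean - CENTRE)
```

On this sketch, a cohort of five pupils all scoring 102 is reported noticeably closer to 100 than a cohort of two hundred with the same average, which is the intended effect of the shrinkage adjustment.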

92. This provides a measure of school effectiveness. The Government said that:

93. The CVA measure is only an estimate, even within its own, limited terms. The score is based on a prediction which is, in turn, based on the actual attainment of a pupil in a given exam on a given day. On another day with the same pupils, a school may well have achieved somewhat different results. This degree of uncertainty is reflected in the confidence interval, which is provided in the performance tables alongside the school's CVA score. The confidence interval is, essentially, the range of scores within which one can be statistically confident that the "true" school effectiveness (according to the model, at least) will lie. It gives a measure of the uncertainty inherent in a school's CVA score and the size of the confidence interval will be determined by the number of pupils in the calculation.
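The relationship between cohort size and the width of the interval can be sketched with a standard normal approximation for the mean. The DCSF's exact method is not set out here, so this is illustrative only:

```python
# Illustrative only: a simple normal-approximation confidence interval
# for a school's mean CVA score, not the DCSF's published method.
import math

def cva_confidence_interval(pupil_scores, z=1.96):
    """Approximate 95% confidence interval for the school's mean CVA
    score. The interval narrows as the cohort grows."""
    n = len(pupil_scores)
    mean = sum(pupil_scores) / n
    # sample variance of the pupils' scores
    variance = sum((s - mean) ** 2 for s in pupil_scores) / (n - 1)
    half_width = z * math.sqrt(variance / n)
    return (mean - half_width, mean + half_width)
```

Under this approximation, a cohort four times the size yields an interval roughly half as wide, which is why small schools' scores carry conspicuously wide confidence intervals in the published tables.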

CVA information for each school is presented on the DCSF website, illustrated by the following example:


Source: DCSF website; Achievement and Attainment Tables.

94. The major issue which arises out of CVA is, as with performance tables generally, that it is not readily understandable to the layman, and parents in particular. It is not clear from the table above what the practical difference is between a school with a Key Stage 1-2 score of 99.5 and a school with a score of 99.9. In fact, other tables available on the DCSF website show that the absolute results for School A are low, whereas the absolute results for School B are high. The implication of the CVA scores is that both schools are similarly effective, albeit with very different intakes (School A has a high number of SEN pupils, School B a relatively low number, according to yet another table). However, this interpretation is not obvious unless one undertakes a thorough analysis and comparison of several tables together. It follows that the intended use of CVA scores, to place the absolute results in context, is diluted because many do not know how to interpret them.

95. We put this concern to the Minister. He told us that:

    I do not think that it is that difficult to understand that in CVA terms, 1,000 is the norm. If you are above 1,000, you are adding value better than the norm. If you are below 1,000, you are adding value lower than the norm. If that is all people understand, then it is pretty straightforward.[141]

Mr Tabberer added that, in publishing performance tables, including CVA scores, the Department is "following the principle of being transparent about all of the analyses so that parents can access the information that they understand or the information that they want". He said that the Department tries to ensure that the public can see comparators and benchmarks and that CVA scores were considered a fair means of comparison.[142] We do not take issue with this, although we have already noted the limitations of any kind of evidence based on test scores alone. We cannot agree, however, that the meaning of CVA scores, as they are presented in the Department's own performance tables, is by any means obvious.

96. We consider that CVA scores are important because there is a strong correlation between the characteristics of a school's intake population and its aggregated test results and CVA attempts to make some compensation for this. Schools with an intake of lower-performing pupils will do less well in the performance tables of raw scores than schools with an intake of higher-performing pupils.[143] Mick Brookes put it bluntly, claiming that performance tables simply indicate "where rich people live and, sadly, where poor people live as well".[144] Dr Mary Bousted, General Secretary of the ATL, told us:

    Variation between schools is much less than variation within schools. We know a school can make about 14% of the difference; the rest of the determining factors on a child's achievement come from their background, actually. And 14% is a lot; I am not undermining what a school can do. However, that means that schools in the most challenging areas have to work extremely hard to get the results that they do.[145]

97. CVA scores, then, go some way towards levelling out these inequalities. However, they are not a transparent measure, and it is not easy to judge the validity of the variables used.[146] CVA is still a relatively blunt instrument for making comparisons, based as it is on the limited dataset of test results and on a series of assumptions and generalisations about a pupil's background. Nigel Utton, Chair of Heading for Inclusion, illustrated this point:

    In the case of my own school, by removing two children with statements for educational needs from the statistics we move from being significantly below the national average to being within normal boundaries. If the contextualised value added measures were sufficiently sophisticated, that would not be possible, as it would weight children with SEN to factor out such a discrepancy.[147]

98. Whilst we consider that Contextualised Value Added scores are potentially a valuable addition to the range of information available to parents and the public at large when making judgments about particular schools, we recommend that the information be presented in a more accessible form, for example graphically, so that it can more easily be interpreted.

99. We are concerned about the underlying assumptions on which Contextualised Value Added scores are based. Whilst it may be true that the sub-groups adjusted for in the Contextualised Value Added measure may statistically perform less well than other sub-groups, we do not consider that it should be accepted that they will always perform less well than others.

100. In addition to these specific recommendations about Contextual Value Added scores, we recommend that the Government rethinks the way it publishes the information presented in the Achievement and Attainment Tables generally. We believe that this information should be presented in a more accessible manner so that parents and others can make a holistic evaluation of a school more easily. In addition, there should be a statement with the Achievement and Attainment Tables that they should not be read in isolation, but in conjunction with the relevant Ofsted report, in order to give a more rounded view of a school's performance; a link to the Ofsted site should also be provided.

101. We have received some evidence that Ofsted places considerable weight on test scores when making judgments about schools under the new, lighter touch, inspection regime. The NUT said that Ofsted relies on CVA as the baseline measure for school evaluation.[148] Heading for Inclusion similarly stated that Ofsted inspections:

    […] focus almost entirely on a school's ability to produce high results in tests at various stages, whether they be Key Stage SAT results or GCSEs. This has led to schools devoting much of their time to 'playing the game' and teaching the children to pass the tests.[149]

Other witnesses also expressed concern that Ofsted uses information in the performance tables as key inspection evidence.[150] Cambridge Assessment made the point that the new Ofsted inspection regime is far more dependent on national assessment data than its predecessor. Although Cambridge Assessment stated that the new regime has been broadly welcomed by schools, it argued that the regime fails to take into account the essential weaknesses in these data.[151] The IPPR gave a measured assessment of the situation:

    […] the results of national tests are a critical input into Ofsted inspections, and a bad inspection may result in a school being issued a notice to improve, or risk being placed in special measures. Entering special measures means that a school loses its autonomy and represents a severe criticism of the leadership of the school. […]

    It is quite right that there should be a robust inspection mechanism to provide schools with powerful incentives to improve, and especially to ensure that no school falls below a minimum acceptable standard. However, if test results are to play an important role in such a powerful incentive mechanism, it is all the more important that they are robust, valid, and do not negatively impact on other desirable aspects of the learning environment.[152]

The IPPR added, however, that it is important not to overstate these arguments and that Ofsted does take into account a wide range of other factors in its inspections.[153]

102. The scope of this inquiry does not extend to a thorough examination of the way Ofsted uses data from the performance tables under the new, lighter touch, inspection regime. However, we would be concerned if Ofsted were, in fact, using test result data as primary inspection evidence in a disproportionate manner because of our view that national test data are evidence only of a very limited amount of the important and wide-ranging work that schools do.

103. So far, we have considered objections to performance tables, including CVA measures, based on arguments that they are not a valid measure of the performance of schools judged across the full range of their activities, and that they are not readily understandable by those who may wish to use them, especially parents. However, the most serious and widespread objection to performance tables is, as with performance targets, the distorting effect that they have on the education which takes place in schools.

104. The Government states that the performance tables are "an important source of public accountability for schools and colleges".[154] The use of performance tables for school accountability means that a school's standing in the performance tables is a matter of significant importance to that school, directly or indirectly affecting the morale of pupils and teachers; the attitudes of parents; the school's standing in the local community and within the wider local authority; the resources allocated to it; and perhaps even the school's very survival. The stakes, as many witnesses have pointed out, are high.[155]

105. The evidence we have received overwhelmingly suggests that these high stakes lead to serious distortion of the educational experience of pupils (see further Chapter 4): teaching to the test, narrowing of the taught curriculum and a disproportionate focus on borderline students.[156] Witnesses have commented that the use of performance tables as accountability measures has had the effect of "undermining good practice in many classrooms"[157] and has encouraged a "risk-averse culture"[158]. Performance tables "depress and demotivate teachers who struggle to make children achieve grades they are not quite ready for".[159] The NASUWT told us that the practical effect of performance tables:

    […] is to contribute to a skewing of the curriculum, generate unacceptable levels of pressure and workload at school level and entrench a competitive rather than collaborative culture between schools. They are also responsible for many of the pressures that inhibit the ability of teachers to exercise an appropriate level of professional discretion and autonomy.[160]

Professor Tymms argued that:

    We are forcing teachers to be unprofessional. League tables are an enemy of improvement in our educational system, but good data is not.[161]

106. We consider that schools are being held accountable for only a very narrow part of their essential activities and we recommend that the Government reform the performance tables to include a wider range of measures, including those from the recent Ofsted report.

107. We have considered in this Chapter some of the issues which arise from the use of national test results for the purposes of accountability and monitoring of schools through performance targets and tables. In the next Chapter, we shall consider in more detail some of the unintended consequences of this regime and suggestions for radical reform of the accountability system.


93   (2006) Department for Education and Skills, Making Good Progress: How can we help every pupil to make good progress at school?, p2. Back

94   ibid. p4 Back

95   For examples, see http://www.timesonline.co.uk/tol/life_and_style/education/a_level_gcse_results/ and http://www.telegraph.co.uk/education/main.jhtml?xml=/education/leaguetables/primary2006/resultsall.xml  Back

96   ibid. p4 Back

97   Ev 247 Back

98   Ev 60-61 Back

99   Ev 48 Back

100   Ev 53 Back

101   Ev 53 Back

102   Q370 Back

103   Ev 234 Back

104   Ev 234 Back

105   Ev 235 Back

106   Ev 235 Back

107   ibid. p4 Back

108   Q367 Back

109   Ev 158 Back

110   Ev 158-159 Back

111   Q339 Back

112   Smithers, A (2007) "Schools", in Blair's Britain 1997-2007, ed. Anthony Seldon, pp 361-384, Cambridge: Cambridge University Press, ISBN 978-0-521-70946-0. Back

113   Ev 122 Back

114   Ev 265 Back

115   Q160 Back

116   Q374 Back

117   Ev 200 Back

118   Smithers, A (2007) "Tests, tables and targets", in Tables, Targets and Testing: It's Time to Trust Our Schools, NAHT Commission of Inquiry into Assessment and League Tables, pp 50-61, Haywards Heath: NAHT. Back

119   Ev 60-61 Back

120   Q27 Back

121   Ev 71 Back

122   Q161 Back

123   Written evidence from Heading for Inclusion, Alliance for Inclusive Education, para 1(c) Back

124   Written evidence from Heading for Inclusion, Alliance for Inclusive Education, para 2(b) Back

125   Ev 263 Back

126   http://www.dcsf.gov.uk/performancetables/; Ev 158 Back

127   Ev 240-241; Q13; Q14 Back

128   Q336 Back

129   Ev 69; Ev 70; Ev 83; Ev 115; Ev 198; Ev 227; Ev 244; Ev 247-248; Q153; written evidence from Lorraine Smith, Headteacher, Western Church of England Primary School, Winchester, para 13  Back

130   Q79 Back

131   Written evidence from Advisory Committee on Mathematics Education, para 25; written evidence from Lorraine Smith, Headteacher, Western Church of England Primary School, Winchester, para 13 Back

132   Written evidence from Advisory Committee on Mathematics Education, para 25 Back

133   Ev 69 Back

134   Q138 Back

135   Q13 Back

136   Ev 68; see also written evidence from Association for Achievement and Improvement through Assessment, para 7 Back

137   Ev 75 Back

138   Q154 Back

139   Q338 Back

140   Making Good Progress, p4 Back

141   Q360 Back

142   Q357 Back

143   Ev 52; Ev 115; Ev 201; Q22; Q23; written evidence from LexiaUK, para 2.7; written evidence from Advisory Committee on Mathematics Education, para 25 Back

144   Q146 Back

145   Q154 Back

146   Ev 115 Back

147   Written evidence from Heading for Inclusion, Alliance for Inclusive Education, para 2(b) Back

148   Ev 262 Back

149   Written evidence from Heading for Inclusion, Alliance for Inclusive Education, para 1(d) Back

150   Ev 48; Q153; Q169; written evidence from Advisory Committee on Mathematics Education, para 3; written evidence from Lorraine Smith, Headteacher, Western Church of England Primary School, Winchester, para 7 Back

151   Ev 216 Back

152   Ev 238 Back

153   Ev 238 Back

154   Ev 158 Back

155   Ev 58; Ev 65; Ev 70; Ev 200; Q224 Back

156   Ev 57; Ev 58; Ev 65; Ev 70; Ev 83; Ev 246; Ev 247; Q224; written evidence from Doug French, University of Hull, para 2.4; written evidence from Advisory Committee on Mathematics Education, para 24; written evidence from Mathematics in Engineering and Industry; written evidence from Institute of Physics, para 6; written evidence from Barbara J Cook, Headteacher, Guillemont Junior School, Farnborough, Hants; written evidence from Association of Science Education, para 18  Back

157   Written evidence from Mathematics in Engineering and Industry Back

158   Ev 57 Back

159   Written evidence from Barbara J Cook, Headteacher, Guillemont Junior School, Farnborough, Hants Back

160   Ev 246 Back

161   Q13 Back


 

© Parliamentary copyright 2008
Prepared 13 May 2008