Select Committee on Children, Schools and Families Minutes of Evidence


Examination of Witnesses (Questions 40 - 54)

MONDAY 10 DECEMBER 2007

PROFESSOR SIR MICHAEL BARBER AND PROFESSOR PETER TYMMS

  Q40  Stephen Williams: I heard about it on Start the Week this morning, and someone was pouring cold water on it, saying that factored backwards, it implies the Victorians were stupid, when clearly they were not. If grades have been inflated, and if it is accepted that roughly 90% of those who pass A-levels now go to university rather than straight into work, as was the case when I took them, are A-levels fit for purpose?

  Professor Tymms: You really need to ask what the purpose is. If the purpose is straight selection to university, there is a problem at the top end with that differentiation. We need more differentiation, and if we do not get that right, other systems will come in—people will produce their own American SATs for selection to university, or a new law test. That will undermine the purpose of A-levels, which have been a very good motivator in our colleges and sixth forms. There are some great teachers working in that area, and it would undermine that. There is another question about whether A-levels are fit for purpose. Do they prepare students well for their next stage of study? Again, it is quite complicated. AQA's research committee has been investigating whether that is the case. It has gone to the law departments and psychology departments to find out whether they believe that law and psychology A-levels and so on are useful. There is another issue out there. There are never straightforward answers, but we need to ask the questions. Are the students going on to university actually able to do those kinds of thing? People are always complaining about maths and reading, so we see four-year courses instead of three-year courses because students apparently have not done enough maths. If you are just asking straight whether they are fit for purpose, I do not think that they are fit for purpose at the top end for selection, but for the rest they do pretty well. I should add one other thing about A-level standards. It has to do with the setting of standards over time. I talked earlier about setting standards for key stage assessments over time. The way that it is done for Key Stage 2, for example, is multifarious. There are lots of ways to maintain the standards over time, but one way is to take the students who do the key stage assessment this year and give a proportion of them next year's test secretly to see how they do—pre-testing it with the next people and seeing what level they were given last year. It is not a perfect system, but it is an interesting way to do it. A-levels and GCSEs get no pre-testing. All the standard-setting is done afterwards on the basis of statistical relationships and judgments. No items used last year are used this year. In something like the programme for international student assessment, they do the tests, release half the items and keep some so they can be used next year to standardise next year's test. It is the same with the progress in international reading literacy study. A-levels and GCSEs do not have any pre-testing, which may be an issue that needs to be faced up to. Most of the systems in the world have pre-testing.
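
As an editorial illustration of the common-item approach Professor Tymms describes—reusing secure material across years so that two cohorts can be placed on the same scale—the sketch below shows a deliberately crude mean-based anchor adjustment. All figures and names are invented; this is not the procedure any awarding body or QCA contractor actually uses.

```python
# Minimal sketch of anchor-item equating with hypothetical data. A block of
# secure items sat by both cohorts lets us separate a genuine change in
# cohort ability from a change in test difficulty.

from statistics import mean

# Scores on the shared (secure) anchor items for each year's cohort.
anchor_year1 = [12.1, 14.3, 11.8, 13.0, 15.2, 12.7]
anchor_year2 = [13.4, 15.0, 12.9, 14.1, 16.0, 13.8]

# Raw scores on each year's full (different) test.
full_year1 = [48, 55, 44, 51, 60, 47]
full_year2 = [50, 58, 46, 54, 63, 49]

# How much better (or worse) cohort 2 did on identical material.
anchor_shift = mean(anchor_year2) - mean(anchor_year1)

# Any extra difference on the full tests is attributed to the tests
# themselves, so year-2 scores are adjusted onto the year-1 scale.
test_shift = mean(full_year2) - mean(full_year1)
difficulty_adjustment = test_shift - anchor_shift

equated_year2 = [score - difficulty_adjustment for score in full_year2]

print(f"cohort improvement on anchor items: {anchor_shift:.2f}")
print(f"adjustment applied to year-2 test:  {difficulty_adjustment:.2f}")
print("year-2 scores on the year-1 scale:", [round(s, 1) for s in equated_year2])
```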

  Chairman: I am aware that we have two sections to complete this evening, and some of us want to hear Ed Balls in another place later. Sir Michael.

  Sir Michael Barber: I will be brief. In an era when we are moving towards everybody staying compulsorily in full-time or part-time education until 18, which I believe to be absolutely right, A-levels are clearly not the whole answer to the challenge. To pick up on the point about fitness for purpose, we need to get apprenticeships working well. I spent Friday afternoon with some apprentices at the Rolls-Royce plant in Derby—a fascinating conversation. We need to get the new Diplomas to work well. We should make the international baccalaureate available. I am in favour of developing a range of possible qualifications for young people, so that we can have qualifications fit for the whole cohort, all of them have something to aim for and all of them go into the labour market with qualifications that have real value.

  Q41  Chairman: If we want young people to stay on until 18, the natural school leaving age for learning and skills progression, what is the point of having a major exam at 16? Is it not becoming redundant?

  Sir Michael Barber: When the full 14-19 programme is working well, the debate will change. I do not think that we are there yet, but I agree that that might well be part of the debate, absolutely.

  Q42  Mrs Hodgson: I would like to move on to models of assessment, but I have a bit of a cold, so you must excuse my deep voice. I understand that, at the moment, the Government are doing about 500 pilots in schools on Making Good Progress. I understand that currently the main purposes of assessment are listed as points one to four. I just wanted to say something about point four: assessment for learning, improving both learning and teaching. I know that this Committee has heard my views on the personalised teaching agenda and I know that Making Good Progress emphasises more informal teacher assessment and personalisation in teaching. Regarding personalisation of teaching, should it not be specialisation in teaching? I say that because it touches on one of the things that I am concerned about, as the Chairman is well aware. Earlier, Sir Michael, you said, "The sooner you know the problem, the easier it is to fix it." So you probably can guess where I am going. I wonder why, when you were advising the Department for Education and Employment on the literacy hour and numeracy hour, you did not suggest that, when children are identified with, say, dyslexia, there should be specialist dyslexia teachers in every school to work with those children? So, getting back to the models of assessment and bearing my particular interest in mind, do you think that the current Key Stage tests remain the appropriate model of assessment and, if they are not, what alternatives would you suggest?

  Sir Michael Barber: First of all, by the way, when I worked in the Department for Education and Employment on the literacy and numeracy hours and all of that, I had detailed conversations with the Dyslexia Institute and the British Dyslexia Association. Ken Follett, who is very actively involved in that world, was somebody whom I talked to often, and incidentally I still do talk to him. I think that what you say is right, that once you get really good teaching consistently across the cohort in literacy, most children will make progress, and then the ones that have a problem, whether it is dyslexia or something else, will be easier to identify. I think that the problem, if you go back before the literacy and numeracy strategies, was that children who had a problem got muddled up in the cohort, because nobody had invested in teachers' skills to teach reading, writing and mathematics in the way that they are now generally able to do. So I completely agree with your point. Whether you use the word "personalisation" or "specialisation", I believe very strongly that, as soon as a child is identified as having a problem such as dyslexia, there need to be specialist people available to advise and help. Importantly, they need to advise the child on how to catch up with the cohort and not sink further behind the cohort. That is really important. I think that the progression pilots that you referred to, which the Government are running now, will effectively involve testing when ready; when the teacher thinks that a child is ready to go to the next level, they will use a single level test. That system has a lot of potential and we talked about it earlier in the Committee. I have been an advocate of just-in-time testing since the mid-1990s, when I published a book called The Learning Game, but they have to get the detail right. That is why I think that it is important that this type of testing is being piloted.

  Professor Tymms: I have talked about the present system, so I will not add to what I have said about that. Let me just pick up on the teacher judgment and the single level test, because I read about that in The Times today and I had read some previous material in the tender documents about the tests. I just wonder if I have got it right. Apparently, under this system the teachers will make judgments, then the pupils will do the tests, and that information will be used to feed into league tables and so on. However, we have now cut out the test, which provided the security, and we are relying on the teacher judgment, but the teachers will be judged by their own judgments. Surely that cannot be the way that the system will operate. That is one thing that puzzles me here. The second thing is that, if we are going to have a single test to do that, we know that, at the moment, the tests, say at Key Stage 2, which I regard as good, reliable, valid tests, have pretty big margins of error when it comes to assessing a particular level of a child. Therefore, by focusing on a single level, they will be less accurate than that. That is worrying in terms of the quality of the data, so I would be keen to see the results of the trials that are being done and whether that system is viable and produces good, reliable data on those students. I also noted that it suggests two tests a year for a pupil, rather than one, which seems a strange route to take. Thinking more broadly about the personalised and specialised learning, I have some sympathy with what you are saying about the specialised learning, but I also have sympathy for the personalised learning. With regard to the assessment that we use currently for children just starting school, there are some children whose vocabulary levels are extremely low, most are pretty good for children of that age and some children at the top are quite exceptional—some of them start school with higher language levels than some of the 11-year-olds leaving primary school. The teacher of such a class has to deal with that group year on year with that phenomenal range in mathematics, language and reading, and that is mixed-ability teaching, which means that you have to do something different with different children in the same class. There are other models: I mentioned the computerised diagnostic assessment earlier. In fact, in Northern Ireland, from this term, all 900 primary schools will not do SATs, but will do computerised diagnostic assessments that will give information to the teacher on the strengths and weaknesses of individual children so that they can improve that with the feedback. Therefore, there is a different model operating there, and we could look at how those things are operating differently.
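
Professor Tymms's concern about margins of error around a single level boundary can be illustrated with a short simulation. This is an editorial sketch with invented reliability figures, not estimates for any real Key Stage test; it simply shows how measurement error turns into misclassification for pupils whose true attainment sits close to the threshold.

```python
# Editorial sketch: how test unreliability becomes misclassification around
# a single level threshold. All figures are invented for illustration.

import random

random.seed(1)

THRESHOLD = 100.0        # hypothetical score needed to be awarded the level
MEASUREMENT_SD = 5.0     # hypothetical standard error of measurement
PUPILS = 100_000

misclassified = 0
near_boundary = 0
near_boundary_misclassified = 0

for _ in range(PUPILS):
    true_score = random.gauss(100.0, 15.0)            # pupil's "true" attainment
    observed = true_score + random.gauss(0.0, MEASUREMENT_SD)

    truly_at_level = true_score >= THRESHOLD
    awarded_level = observed >= THRESHOLD
    wrong = truly_at_level != awarded_level

    misclassified += wrong
    if abs(true_score - THRESHOLD) < 5.0:              # pupils close to the boundary
        near_boundary += 1
        near_boundary_misclassified += wrong

print(f"misclassified overall:       {misclassified / PUPILS:.1%}")
print(f"misclassified near boundary: {near_boundary_misclassified / near_boundary:.1%}")
```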

  Q43  Mrs Hodgson: With regard to what alternative you would suggest, what jumped out at me was that Making Good Progress has been called a one-way ratchet because the teacher will decide when the child is ready for that test. A child might consistently get bad test results, but if they are re-tested on a good day the ratchet will go up. There is never a chance, however, for the child to be levelled down, so it could just be that they have a good test on a good day. The system therefore needs to produce high levels of certainty so that misclassification is minimised, or to allow re-testing of doubtful cases, which does not happen. I have not got the full details of Making Good Progress, so I wonder whether there are any alternatives available to the new single level tests.

  Professor Tymms: Yes, within our centre we run the Performance Indicators in Primary Schools project for schools. Many schools do the test with the children every year, and we look at year on year progress. They get progress graphs from that, and computerised diagnostic assessments would do the same—there are plenty of systems out there. This is just one system, and I really think that we need to look at the trials and the statistics on that to see how they look. We need to monitor the progress of children and spot them when they are falling by the wayside.

  Sir Michael Barber: Clearly, there are alternative systems. The technical details of the progression pilots need to be worked through to ensure that the problems that you and Peter have drawn attention to do not occur. I think that there is a lot of promise in them, but the detail will be crucial, as I have said consistently. I know that Committees are criticised for travelling, so maybe you could do this by reading papers or by video conference, but if I were you, I would look at what is being done in New York City, Hong Kong, where the secondary curriculum is being completely reorganised, and Ontario, where the literacy and numeracy programme, which was originally modelled on ours, is being built on and taken forward. These examples all have implications.

  Q44  Mrs Hodgson: You mentioned personalised learning. I went on a delegation to Sweden that looked at the free school model that is used there, and I was very interested in how they really do focus on personalised learning, as they stream the children according to ability, not age. You might have one nine-year-old who was in with 11-year-olds for numeracy, but in with seven-year-olds for literacy. The children are mixed up according to their ability, which is very interesting.

  Professor Tymms: In Bob Slavin's Success for All programme, he points to the good research evidence for bringing together children with the same reading age some time in the week. So that is an interesting way forward.

  Sir Michael Barber: I agree with that.

  Chairman: Dawn has waited extremely patiently to ask about the unintended consequences of testing.

  Q45  Ms Butler: Sir Michael, you mentioned our basically needing to be future-proof, and I completely agree: we have to make sure that we teach young people for the future, and the Government are right still to focus on maths, English and science as the core subjects. My first question is about testing. Professor Tymms, you said that it was not the testing, but the pre-testing that was the problem for the younger kids. You then said that there was no pre-testing for GCSEs and A-levels. What are the effects of that amount of testing on children, teachers and schools?

  Professor Tymms: I am using "pre-testing" with two different meanings, so I must clarify that. What I meant in relation to setting standards was that the exam-awarding bodies did not pre-test the GCSE tests before they gave them out for real. What I meant in relation to primary schools was that the schools themselves take past papers and get their kids to redo them. Of course, that happens at GCSE as well—pupils will have mocks and practise this and that. The teachers do lots of work on previous papers, but the pre-test at key stage assessments is done by the QCA or whoever is employed to do it; it does not happen at A-level and the rest in the standard setting. That just clarifies the point.

  Q46  Ms Butler: Wonderful. So what do you think the effects of that amount of testing are on children, teachers and schools?

  Professor Tymms: They are multifarious. When you set up a system, you never quite know what is going to happen, and there are lots of unexpected consequences. We have to worry about the focus and the narrowing of the curriculum. Of course, we want to get reading, writing and maths right, but we also want drama and physical activity—we want to keep the children physically active—and there is evidence that that has decreased. In fact, in 2002, with Andy Wiggins, I did a survey comparing Scottish schools and English schools and found evidence of the narrowing of the curriculum, a blame culture in the classroom and so on. We need to watch such things to see what is happening—we need to track and monitor the monitoring. There are unintended consequences, including a focus on borderline children, which is an unhealthy thing. There is a focus on the ones who are likely to get the five A*s to C or the children who are not going to get Level 4. Little clubs are therefore set up to work on the borderline children, rather than the child with special needs. Lots of peculiar things go on as a result.

  Sir Michael Barber: When I worked in the delivery unit, we looked at a lot of targets and data sets, and people predicted perverse or unintended consequences. We used to say, "Obviously, you should just predict the ones you think will happen and then we'll check." If you focused on street crime, for example, the police would predict that other crimes would get worse. In fact, that is not what happened, but it is always worth checking those things. On the level boundaries, we found that although the target was about Level 4, the percentage achieving Level 5 rose very rapidly, even though that was not the borderline at stake. Good teaching is good teaching, just as good policing is good policing. I would like to say two other things. Literacy and numeracy underpin the whole curriculum, and unless you get them right in primary school, young people will be held back in all kinds of ways, including in drama and all the other things that really matter. The second thing that I want to say is that, on the whole, the schools that do best academically also do best in a wider set of outcomes, because they are well-run institutions teaching well and doing everything properly. That is not a perfect fit, but it is generally the case. It is absolutely right to focus on literacy and numeracy, but of course you also want the wider curriculum for young people.

  Q47  Ms Butler: That leads me to my next question. Would the performance and so on of schools be improved if we used a separate mechanism, such as reforming Ofsted inspections? You talked about Ofsted looking at all the different variables, such as the leadership of schools and so on. Would improving Ofsted inspections improve schools and their overall performance?

  Sir Michael Barber: Peter may want to come in, because he has had strong views for many years on Ofsted, but I think that Ofsted should constantly keep its inspection process under review. Since Ofsted was set up in its current form, it has been a positive influence on the schools system over the past 10 to 15 years, but it can always get better. As implied in your question, it should be the institution that looks at those wider things, including the ethos of the school, which matters so much, and its comments on them should get you in, beneath, below and around the data from the tests. Ofsted should constantly keep its processes under review. My view is that all processes, including leadership training, professional development for teachers and Ofsted, should focus in the next decade on achieving a consistent quality of classroom teaching. I quoted Andreas Schleicher, who said that in England we are doing more of the right things than any other system in the world, but we have not yet had the impact on consistent classroom quality, so I should like to see Ofsted, professional development and leadership development all focusing on that, because it is the central challenge for our schools system.

  Professor Tymms: Just before Ofsted changed to its present system, a paper was published by Newcastle university—by Shaw, Doug Newton and others—in which the authors compared the GCSE results of a school shortly after an Ofsted inspection with what it normally achieved. They showed that immediately after the inspection, its results were worse, which is interesting, considering the amount of money that was spent just to frighten the teachers. After that, Doug Newton was called in by Gordon Brown for an interview, and shortly afterwards the money for Ofsted was reduced and we went to the cheaper form of inspection. We need a thorough examination of Ofsted's impact on schools. What is it actually doing? That is exactly your question, but rather than give an opinion, we should deliberately examine it to see what the impact is by looking at schools before and after they have inspections, and tracking them statistically across the country, because it is not clear that inspections are improving schools, although they might be. Neither is it clear that they are damaging schools, but they might be. We need to see that kind of evidence. It is a lot of money and there is a particular theory behind it. Another point that links into that is the view of what matters in the educational system. Michael has been saying that teachers matter, and I agree absolutely. He has also emphasised the importance of heads, but it is not so clear to me that heads are key with regard to reading and maths. In fact, what we have in schools are loosely coupled organisations: the head must influence this or that, and there is the teacher in the classroom. When I undertook a recent examination of 600 secondary schools and 600 primary schools, and looked at their value-addeds and how they changed when the head changed, I could find no evidence for such change at all. Actually, the teacher is the key. The head is vital for other things, such as the morale of staff, the building of new buildings and the design of the curriculum—appointing good staff is one vital thing that the head does—but we need to think about structure. We need to monitor things continuously and always ask what the impact is of what we are paying our money for. Ofsted is one of those things.
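
The value-added comparison Professor Tymms describes—checking whether a school's value-added shifts when the head changes—can be sketched roughly as below. It is an editorial illustration with made-up data: "value added" here is just the residual from a simple intake-to-outcome regression, which is far cruder than the contextualised models used in practice.

```python
# Editorial sketch of the kind of check described: does a school's
# value-added move when the head changes? Data and model are invented.

import numpy as np

rng = np.random.default_rng(0)

n_pupils = 400
intake = rng.normal(100, 15, n_pupils)                      # prior attainment
outcome = 0.8 * intake + rng.normal(0, 10, n_pupils) + 20   # later attainment
year = rng.integers(0, 6, n_pupils)                         # six cohorts of leavers
head_changed_in_year = 3                                    # hypothetical headship change

# "Value added" = residual from a simple regression of outcome on intake.
slope, intercept = np.polyfit(intake, outcome, 1)
value_added = outcome - (slope * intake + intercept)

before = value_added[year < head_changed_in_year].mean()
after = value_added[year >= head_changed_in_year].mean()

print(f"mean value-added before head change: {before:+.2f}")
print(f"mean value-added after head change:  {after:+.2f}")
print(f"difference:                          {after - before:+.2f}")
```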

  Sir Michael Barber: We can get caught up in metaphors, but the way I see it is that the head teacher's role is like the conductor of an orchestra. They do not play a single instrument, but if they do their bit, everybody else plays better. That is probably what we are trying to do with head teachers, particularly in our devolved system in which heads are given a lot of discretion.

  Q48  Chairman: You have both been in this game for quite some time. A week is a long time in politics, and 10 years is an awfully long time in politics. If you could go back to when you started, what would you do differently, not only to drive up standards—one of you said that the standards are in the heart, rather than just the head—but to increase the ability of children to excel within themselves?

  Sir Michael Barber: In the book I mentioned earlier, Instruction to Deliver, which was published in the summer, I own up to a whole range of mistakes. One reason for my looking slightly quizzical when you asked that question is that I was thinking, "How long have you got?" I could spend the next hour or so talking about this, but I know that you have other things to do.

  Chairman: We have the book to refer to.

  Sir Michael Barber: First, something in which I was personally involved that I would see as a mistake took place in 2000. After the big jumps in numeracy and literacy that we have been debating, there was a general tendency, of which I was a part, to consider that primary school improvement had happened and that it was then all about secondary schools. That took the focus off, but we were really only at the beginning of seeing that improvement through. Secondly—this is a detail, but it is important, looking back—in the 2000 spending review, we set a new target for primary school literacy, aiming to raise it from 80 to 85%. I think that that was a mistake because we had not reached the 80% target. It was demoralising. I, personally, regret not negotiating more vigorously at the time. If you look in my book you will find a whole list of things that I got wrong. Overall, I am very proud of the contribution that I have been able to make to improving the education system over the last decade. While we could have been bolder and we could have achieved more, I am absolutely confident—I think the data confirm this—that we have the best-educated generation in history. There is much more to do to prepare for the 21st century, but it has been a great experience.

  Q49  Chairman: Something quite interesting that you said earlier was that it is not we who are making these demands—it is the world. It is the competitive global economy and so on. Many countries seem to be responding to that task, not by using testing and assessment and the path that you or the Government have chosen, but by choosing very different ways. People tell the Committee that the curriculum is too narrow, that people teach to the test and that children no longer get the chance to explore a whole range of activities and subjects as they used to do. What do you say to people who say that?

  Sir Michael Barber: Two things. One is that I am certainly not arguing, and that may now be my fate in history, that testing and assessment are the single lever to drive improvement in standards. They are part of a whole system. The crucial elements are combining the challenge that comes from the testing and accountability system with serious support, investment in teachers' skills, and, as Peter said, giving teachers the capacity to do the job. It is the combination that I believe in. Systems that have pressure without support generally do not succeed and systems that have support without pressure do not succeed either. It is getting the combination right that is the key, particularly when you want to change things. Some systems—Finland is an example—recruit good people into teaching from high up the graduate distribution, and they train them well. Their standards are well established and have got into teachers' heads, so they need less testing; they are already at the top of the world league tables. If you are going to try to change things, the combination of challenge and support is most likely to get you there.

  Q50  Chairman: Peter, what should they have done that they did not do?

  Professor Tymms: First, they should have taken notice of the research evidence of what works. I do not mean surveys, or what is associated with what, but what changes were made and where we saw the difference. In particular, I would go for randomised controlled trials. In reading, for example, there is a wealth of knowledge; we know a great deal about reading and how to help children with it. That knowledge was more or less ignored when we were making changes, so evidence is important, and in the light of that I would go to the experts. When the School Curriculum and Assessment Authority and its precursor, the School Examinations and Assessment Council, were set up, that was done without any test experts at all. It is only now, after the QCA has been put in place, that people are available who really know about tests and the way forward. Now, the standard-setting is done properly; when it was done earlier, they would buy some people in and reckon that it could be sorted out. We need experts. When Estelle Morris spoke to the British Educational Research Association meeting a little while ago, she said that while she was Secretary of State she took almost no notice of the research that was around. I find that extremely worrying. We need to take notice of the research, rather than surveys and statements such as "This person is doing better," or "My father said this and therefore it is good for me." We should look at what has been done in randomised controlled trials that have been shown to work. Before we put in new systems we need to trial them and check that they work. When the national literacy strategy was going to be rolled out, a trial was running, but it was stopped before it was complete. Everybody had to do something that had not been trialled. Later, an evaluation was made post hoc, when everybody was doing the same thing and it was too late. We need to compare this and compare that. That is really important. There is real knowledge out there. We can evaluate things, and when we put in new systems, we need to track them over time. We need, too, to get good experts. Above all, we need good teachers. I absolutely agree: we need good teachers and we need to trust them. Perhaps we need to free up the curriculum, and perhaps teachers should experiment with it. To find new ways of working, we have to go outside England. Why can we not allow people here to look at new ways of working, assessment and so on? They are pretty good people, those teachers. We absolutely rely on them and we should rely on them more.
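
Professor Tymms's point about randomised controlled trials—comparing an intervention group with a control group assigned at random—can be shown with a minimal sketch. It is an editorial illustration with simulated reading scores rather than any real trial; the permutation test at the end is one simple way of asking whether an observed difference could plausibly be chance.

```python
# Editorial sketch of a small randomised controlled trial analysis with
# simulated reading scores: randomise, compare group means, then ask how
# often a gap that large would appear if the intervention did nothing.

import random
from statistics import mean

random.seed(42)

pupils = list(range(120))
random.shuffle(pupils)
treatment, control = pupils[:60], pupils[60:]

# Simulated post-test scores (hypothetical 3-point true effect).
score = {p: random.gauss(100, 12) + (3 if p in treatment else 0) for p in pupils}

observed_gap = mean(score[p] for p in treatment) - mean(score[p] for p in control)

# Permutation test: reshuffle the scores many times and count how often a
# gap at least as large arises purely by chance.
extreme = 0
all_scores = list(score.values())
for _ in range(5000):
    random.shuffle(all_scores)
    fake_gap = mean(all_scores[:60]) - mean(all_scores[60:])
    if abs(fake_gap) >= abs(observed_gap):
        extreme += 1

print(f"observed treatment-control gap: {observed_gap:+.2f}")
print(f"approximate p-value:            {extreme / 5000:.3f}")
```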

  Q51  Chairman: When the previous Committee looked at the issue of teaching children to read, we came up with two major recommendations. We tried to follow evidence-based policy, and the evidence suggests that if you take any systematic way of teaching children to read, it works. We also said that it was to do with the quality of the teachers. We found that there is very little evidence that anyone ever trained our teachers to teach children to read on any basis at all. The Government then rushed off—influenced by a former member of this Committee, I believe—to set up a Committee that recommended synthetic phonics, which had been trialled only in Clackmannanshire. We were a little disappointed that our recommendations were not fully taken on board.

  Sir Michael Barber: Chairman, I cannot help noticing the imbalance in your questions. You asked me what mistakes I have made and then asked Peter what mistakes I have made as well. I wish that you had asked him what mistakes he has made, but since you did not—

  Q52  Chairman: What mistakes has he made?

  Sir Michael Barber: You should ask him. However, since I have managed to get the floor, I think that basing policy on evidence is very important. I talk a lot about evidence-informed policy, and I believe that the programmes that we have been talking about are among the most evidence-informed policies ever; few policies have had better evidence on which to base them. Another question that arises when you are involved in government is how long you have got. Looking at the data that we had on primary reading standards prior to 1996 and looking at the challenges of the 21st century—Peter and I are broadly in agreement about this—something had to be done urgently. We took the evidence that was available. There is a great report by Professor Roger Beard—he is now at the Institute of Education—which summarises the evidence base for the literacy strategy. We worked very hard to take all the evidence into account. I have been honest about mistakes that I made, but overall it was one of the most evidence-informed policies ever. Its replication around the world, with variations, demonstrates that the same results can be achieved.

  Q53  Mrs Hodgson: On the point about good teachers, I have recently returned from Singapore where, as in your example of Finland, teachers are recruited from the top 10% of the cohort of university graduates. The Government offer whatever incentives they have to. They also headhunt teachers—they spot them. The education officers monitor graduates. They go up to them and say, "Have you thought about becoming a teacher?"

  The teaching profession is held in much higher regard, and is revered as it was here 50 or 60 years ago. The pay reflects that. Teachers are paid a lot better. There is an incentive, because if students are bright and go into teaching, they might be sent to the UK, where their training is funded. They then go back and teach in Singapore. It is interesting that we are not at that stage.

  Sir Michael Barber: That is one of the examples that we use in our recently published report, How the World's Best-Performing School Systems Come Out on Top. We looked at systems on several continents, including the one in Singapore. What you say is absolutely right, with the exception that they do not pay teachers more than here. However, they pay them reasonably well. If you talk to the Singaporean Education Minister, as perhaps you did, you find that they are constantly looking for ways to motivate young people to go into teaching in the future. We have done reasonably well on that over the last few years, but we have a long way to go and can never be complacent about ensuring that we secure really good entrants into the teaching profession, both out of university, and among mature people who have gone into other lines of work and then change to teaching.

  Q54  Chairman: Thank you, Sir Michael and Professor Tymms. It has been a really good sitting—a marathon sitting. I am sorry that we have kept you so long, but it has been so absorbing and interesting: we have enjoyed it immensely. I am sorry that we were not an all-party Committee today. It is a great pity that you did not have a slightly broader range of questions, but you did have a fair range. It was two-party, but not all-party. Will you remain in contact with us? If we want to come back and ask you some other questions about the evidence that you have given, will you be accessible?

  Sir Michael Barber: Absolutely.

  Professor Tymms: Sure.

  Chairman: I am glad that we are not paying the full consultancy fee for today. Thank you very much for coming.





 

© Parliamentary copyright 2008
Prepared 13 May 2008