UNCORRECTED TRANSCRIPT OF ORAL EVIDENCE
To be published as HC 169-vi
HOUSE OF COMMONS
MINUTES OF EVIDENCE
TAKEN BEFORE THE
CHILDREN, SCHOOLS AND FAMILIES COMMITTEE
Testing and Assessment
Monday 18 February 2008
JIM KNIGHT and RALPH TABBERER
Evidence heard in Public Questions 328 - 435
USE OF THE TRANSCRIPT
1. This is an uncorrected transcript of evidence taken in public and reported to the House. The transcript has been placed on the internet on the authority of the Committee, and copies have been made available by the Vote Office for the use of Members and others.
2. Any public use of, or reference to, the contents should make clear that neither witnesses nor Members have had the opportunity to correct the record. The transcript is not yet an approved formal record of these proceedings.
3. Members who receive this for the purpose of correcting questions addressed by them to witnesses are asked to send corrections to the Committee Assistant.
4. Prospective witnesses may receive this in preparation for any written or oral evidence they may in due course give to the Committee.
Oral Evidence
Taken before the Children, Schools and Families Committee
on Monday 18 February 2008
Members present:
Mr. Barry Sheerman (Chairman)
Annette Brooke
Mr. Douglas Carswell
Mr. David Chaytor
Mr. John Heppell
Fiona Mactaggart
Mr. Andy Slaughter
Lynda Waltho
Examination of Witnesses
Witnesses:
Jim Knight MP, Minister for Schools
and Learners, and Ralph Tabberer, Director
General, Schools Directorate, Department for Children, Schools and Families,
gave evidence.
Q328 <Chairman:> Now that people have had time to settle down,
I welcome the Minister for Schools and Learners, Jim Knight, and Ralph Tabberer
to our proceedings. Our inquiry into
testing and assessment is getting particularly interesting. We sometimes say to ourselves that we know
that we are getting under the skin of an inquiry when we feel that we are more
dangerous than we were when we started, because we have a little knowledge. We
have had some very good evidence sessions, and we hope that this one will be of
even more value than the others. Do
either of you want to say anything before we get started?
<Jim
Knight:> As is traditional, I will not make a
statement, because I do not want to delay the Committee. On the letter that I
sent to you today and circulated to other members of the Committee, as certain
portions of the media have shown an interest in this subject, some
clarification might be helpful so that you have more facts to get you beyond
some of the fiction that you may have read in the newspapers. The letter sets out the timetable for
publishing an interim evaluation of the single level tests in the autumn. In
general terms, we are very pleased with the progress of that particular pilot.
Obviously, I will be delighted to answer your questions on that and anything
else that you want to ask.
<Chairman:> Ralph?
<Ralph
Tabberer:> I have no introduction.
Q329 <Chairman:> May
I start us off by saying that this testing and assessment all seems to be a bit
of a mess? We have taken evidence, which you must have read-your officials will
certainly have summarised it for you. We have had so much evidence that shows
that people are teaching to the tests and using the tests inappropriately, and
for outcomes that were never intended. A lot of people have criticised the
level of testing and assessment, and we are looking at whether it is
fundamentally effective in improving the life chances of the children in our
schools.
<Jim
Knight:> As you would expect, I do not agree with you
that it is a mess. Naturally, I have heard a lot of the evidence. I cannot be
accountable for what your witnesses say, but I can offer you a bunch of other
people who might say something different. In respect of teaching to the test,
there is a yes and a no answer. In general terms, we are pretty clear about our
priorities in testing. We want people to focus on maths, English and science
and to get them right, which is why they are the subjects that are tested. In
that regard, we want people to teach to those priorities. However, the vast
swathe of teachers and schools up and down the country use tests appropriately.
In order to help those who do not and to improve best practice generally, we
are investing £150 million over the next three years on assessment for learning
to improve the way in which the tests are used.
In
respect of the charge that tests are used inappropriately or for too many
different things, it could be done differently. As some people argue, you could
judge national performance on the basis of some kind of sample test. I am sure
that that would be fine with regard to judgments around the national
performance of the school system, but testing is about not only that, but
parents being able to see how well their child is doing in the school system,
pupils being able to see how well they are doing against a national comparator
and parents being able to see how well individual schools are doing. If you
want to do those sorts of things, some people would argue that alongside sampling
you would have some form of teacher assessment. However, using teacher
assessment to hold schools accountable would put quite a significant burden on
teachers and assessment, so there would need to be some form of accreditation
on how the assessment is done to ensure that it is fair and transparent and
that it compares nationally. When I
look at the matter and begin to unravel the alternatives and think about how
they would work in practice, I find that the current SATs are much more
straightforward-everybody would understand them.
They are used for a series of things, and there might be some compromise
involved, but the system is straightforward and simple, and it shows what our
priorities are and gives us accountability at every level. I do not think that it is a mess at all.
Q330 <Chairman:> If you look internationally, you will see that
such a system looks like an English obsession.
Most other countries in the world do not test and assess as much as we
do. The Welsh and the Scots do not do
so, and nor do most of the countries with which we normally compare
ourselves.
<Jim
Knight:> I visited Alberta in November and found that
it tests just as much as we do. In
fact, we have shifted on the Key Stage 1 test in the past 10 years, whereas
Alberta has continued with externally marked tests that are conducted on a
single day. Alberta is one, but we
could include Singapore.
Q331 <Chairman:> My brother and sister were born in Alberta,
so I know a bit about it. It is hardly
comparable to England, is it?
<Jim Knight:>
In terms of international comparisons, which is what the question was about,
Alberta is out there alongside the usual suspects-
Q332 <Chairman:> I meant countries like ours, such as Germany,
France, Spain, Italy or the United States.
<Jim
Knight:> Some parts of the United States, such as New
York, do quite a bit of testing. Every
education system is slightly different, and it is difficult to draw such
international comparisons and say that this or that is exactly the same from
one to the other. We have a system of
accountability and testing; some countries test, such as Singapore or Alberta,
but others do not. We think that we
have struck the balance. Ofsted
inspects schools, the Qualifications and Curriculum Authority independently
monitors and regulates the tests, and the Office for National Statistics
independently publishes the results of the tests, so the process is perfectly
separated from Government. There is
evidence that standards are consistently improving as a result of the use of
the tests and there is good accountability to parents, which is important.
Q333 <Chairman:> The Government's watchword when it comes to
education and other policies is "evidence-based". When you look at the evidence, are you sure that the testing and
assessment method, which seems to have been uniquely championed in this
country, is effective? Do you have any
doubts at all about it? Is it
successful? Does it give children in
our schools a better experience and education than that provided by our
competitors?
<Jim
Knight:> I think that it is successful. When I look at how standards have improved
since tests were introduced and since we increased accountability through tests
and tables, I can say that they have worked.
That is not to say that the situation cannot be improved. The Government are piloting, through "Making
Good Progress", single-level tests and testing when ready. As we signalled in the Children's Plan,
if we find that those pilots are working, we may be able to evolve SATs one step
further. That does not mean that we
want to retreat from tests.
Q334 <Mr. Slaughter:> Picking up on what the Chairman has
said, if I understood you correctly, you said that in relation to national
policy or, indeed, national standards, which is whether overall standards of
education and learning are rising, there are alternatives to testing every
pupil in every school-in other words, it could be done by inspection, sampling
or the like. A valid criticism might be
that there is too much testing, which distorts the learning process, and that
you could do it another way as regards national results and policy. Are you defending testing every school
on the basis of the effect on that school?
Did I understand that correctly?
<Jim
Knight:> Yes,
I think that you probably have understood correctly. It is worth saying that no pupil spends more than 2% of their
time taking tests. Assessment,
including tests, is and always will be part of teaching. The question then is whether we should have
national tests and whether the amount of time spent taking and preparing for
national tests is too stressful. I do
not buy that. I think that life has its
stresses and that it is worth teaching a bit about that in school. I do not get the argument. I visit enough schools where tests are used
extremely well by teachers to drive forward and progress learning. In the end, I flatly reject the argument
that there is too much testing.
Q335 <Mr. Slaughter:>
It is the Government, not us, who are thinking of relieving the burden
of vivas in foreign languages.
Obviously, you are sensitive to the stress on the poor dears.
<Jim
Knight:> I am
delighted that you have brought that up.
Certainly, the move that we are making on the oral examination for
modern foreign languages is not because of stress, but because of standards. Ron Dearing has said that hit-and-miss,
one-off, 20-minute tests in which you are coached to rote-learn bits of French,
or whichever subject is being studied, are not serving us well. Controlled teacher assessment during the
course that tests different scenarios in which people use languages is likely
to improve standards significantly, which is why we want to do it. It is not because of stress.
Q336 <Mr. Slaughter:>
I meant the stress on the examiners-the native speakers listening to their
languages being mangled in those exams.
Let
us talk about individual schools. You
gave the example of information for parents, so that they can look at league
tables and select a school. That is
free-market education, is it not? It
would aid parents in selecting and migrating to schools, particularly if they
have the time, knowledge, access to the internet and all that sort of business
in order to get hold of such information.
Is that not part of the postcode lottery for schools or of the
segregation or decomprehensivisation of schools?
<Jim
Knight:> A lot
of implicit values are tied up in that.
I will not say yes, but it very much informs parents, which is a good
thing. We explicitly want to move to a
position in which parents choose schools, rather than schools choose parents,
and I have debated that with the Committee in the past. We believe in parental choice-we can
rehearse those arguments, if you like-but phrases such as "postcode lottery"
involve separate issues from whether we should publish data about schools. Quite frankly, if we did not publish such
data, there would be an outcry that we were hiding things, and the media would
publish them anyway. I think that it is
better that we put them out in a controlled and transparent way so that they
can be scrutinised by bodies, such as this Committee, rather than leaving it to
the vagaries of how newspapers choose to publish them.
Q337 <Mr. Slaughter:> Looking at the positive side of that,
as far as the Department and the inspectorate are concerned, do you think that
part of the role of testing in individual schools is to identify the
performance of schools and of the teaching staff within them in order to alert
you to failure or underperformance in particular?
<Jim
Knight:> I
write to the top 100 most-improved schools in the country every year. Testing helps me to identify success. I also keep an eye on those that are not
doing so well, and my colleague, Andrew Adonis, does the same-perhaps he is the
bad cop to my good cop. However, the
data help us to manage the system. We
are accountable to Parliament and are elected by the public in order to
continue the improvements of the past 10 years in our education system.
Q338 <Mr. Slaughter:>
I suppose that what I am getting at is that if-you might not be with me
on this-one of the effects of publishing data is that parents who are savvy
enough gravitate towards or even migrate to certain schools, which results in more
of a division between good schools and bad schools in an area, that would at
least allow you, or professional educationalists, to identify underperforming
schools and to do something about them through, for example, the academy
programme.
<Jim
Knight:> Yes, it allows us to identify areas where we
need to intervene. If we did not have
the tests and tables, something would be missing from the body of information
that we recommend that parents look at when making decisions about which
schools to choose for their children, but they should not be seen in isolation. They are very simple and easy for people to
understand-they are easier than leafing through Ofsted reports, which we also
recommend-although perhaps not as easy as chatting to other parents in the
neighbourhood or going to visit the school itself, which are the sorts of
things we expect parents to do. However
articulate parents are, and however much technology they have at home, those
are the sorts of things that we expect them to do when choosing schools for
their children.
Q339 <Mr. Slaughter:> One aim
of the academy programme, as I understand it, is to target underperforming
schools, particularly in areas of deprivation, and to put good schools-whether
new or replacement schools-into such areas.
Do you see tests in the same way?
Do they enable you to focus resources on areas of deprivation or
underperformance, rather than simply to present information to third parties so
that they can deal with such things?
<Jim
Knight:> Undoubtedly, they are an indicator that we
use. They are not the only
indicator-we, too, look at Ofsted reports and other factors, such as attendance
rates, when assessing how well a school is doing-but they are undoubtedly the
prime indicator. We have explicitly set
targets for the number of schools with 25% or fewer pupils getting five A*-C
grades at GCSE, and we now have targets for 30% to get five higher-level GCSEs,
including English and maths. Ten or 11
years ago, half of schools did not have more than 30% of pupils getting five
higher-level GCSEs including English and maths. That is now 21% of schools, but we have further to go. That measure helps us to target schools, and
we are doing work on that right now.
Q340 <Mr. Chaytor:> Minister, the Department's submission to the
inquiry describes the arrangements at Key Stage 1, saying that, "The child will
not necessarily recognise a difference between the formal tests and tasks
he/she completes for other classroom exercises." If that is important at Key Stage 1, why is it not important at Key
Stages 2 or 3?
<Jim
Knight:> I will let Ralph make a contribution, because
he has been sitting here very patiently, but I would say that there is a
difference. When you look at the age at
Key Stages 1, 2 and 3, there is clearly a significant age difference, and with
that, in general terms, there is a difference in maturity. There comes a point when it is appropriate
to start introducing young people to the pressures of assessment. Those are pressures that we all live with
throughout our educational careers; we have to start getting used to that at
some point, and I think 11 is a better age than seven.
Q341 <Chairman:> Ralph, I hope that you do not feel
neglected. The Minister has said that
you have been very patient. Will you
catch my eye if you want to say something, and we will welcome you in?
<Ralph
Tabberer:> Thank you, Mr. Chairman. I endorse what the Minister has said. We try to take decisions about assessment
that suit the context and the particular teaching and learning environment. We will perhaps look at the use of more
controlled assessment and teacher assessment, where they offer us a better
alternative. That might, for example,
be when young people are younger; there
may be more variation in their performance on particular days, and such
assessments may be more sensitive. We
would also look at the use of more controlled assessment or teacher assessment
in areas such as applied learning.
There are aspects of applied learning in diplomas that will not be as
susceptible to an external test.
Q342 <Mr. Chaytor:> When we get to Key Stage 2, the judgment is
made on tests that last about 45 minutes.
How does that equate with the Minister's criticism a few moments ago of
what is now the old system of language orals?
You said, "We want to move away from the hit-and-miss, 20-minute test in
which you are coached to learn." How
can it be wrong that there is a hit-and-miss, 20-minute test, but right that
there is a hit-and-miss, 45-minute test?
<Jim
Knight:> In respect of the oral examinations for GCSE,
those are the qualifications that you take with you through your life. I cannot remember whether I got a B or a C
for the oral.
<Mr. Chaytor:> I am sure it
was a B.
<Jim
Knight:> Well, I got a C for the written one and a B
for the oral or vice versa. I cannot
remember which way round it was, but I do remember the oral exam. You carry that with you. I cannot imagine that many people remember
their SATs scores-I do not reckon many of us were young enough to take them.
Q343 <Mr. Chaytor:> But the tests determine the primary school's
position in the league tables and the pupil's self-esteem when they enter
secondary school. My question is why
are the Government so hung up on the single test at the end of Key Stage
2.
<Jim
Knight:> It may be that we are not. It may be that if testing when ready and the
single level tests prove effective in raising standards, we will be able to
move to a position in which you have a number of test windows during a
year-there are currently two, but we might be able to develop that further-and
it is not necessarily all about how everyone did on a rainy Monday afternoon in
English and a rainy Friday afternoon in maths at the end of Key Stage 2; it can
be throughout that key stage.
<Ralph
Tabberer:> I add to that that the judgment we are making
is about the context-the type of learning taking place-and an oral assessment
looks to us better placed as a teacher assessment rather than as an external
exam. In relation to the end of key
stage tests, there is also an issue in every assessment of manageability. If we go back far enough in the history of
the testing regimes, at key stages there was experience of using teacher
assessment. That proved, in the early
days of the national curriculum, very unmanageable. It meant that we were losing huge amounts of teacher time to
moderation, which was not proving terribly effective. It is about trying to judge the best kind of measurement-the
manageability of the measurement and the validity of the measurement.
Q344 <Mr. Chaytor:> In getting an accurate picture of a child's
ability in year 6, is it not more valid to have the result of teacher
assessment throughout the year as well as an external test, rather than relying
simply on the external test?
<Chairman:> What is the
point of personalised learning?
<Jim
Knight:> That is the excitement of the progression
pilots. The current situation with SATs
is that everyone takes the test and then the examiner decides which grade you
are at based on your response in that test, whereas the single level test is a
scenario whereby the teacher assessment informs whether the child is ready and
what level the child is put in for, so the test is used as part of teacher
assessment for learning, rather than sitting alongside it as it does at the
moment.
Q345 <Mr. Chaytor:> My other question is this. In moving away bit by bit from the regime
that was inherited in 1997, will you accept that there is a link between a very
rigid testing regime and disaffection and demotivation among children who do
not perform well under that kind of regime?
<Jim
Knight:> I think that it would be a very tenuous
link. You see schools that are
performing very well in very difficult circumstances. Obviously, part of what they are doing in performing well is that
a large number of their pupils are doing well in tests. Why are they doing well? Which comes first, the chicken or the
egg? I think in this case it is getting
the behaviour, ethos and atmosphere in the school right, and getting people
focused on their learning, which means that they are not disengaged. What then subsequently happens is that they
do well in their tests, but my own feeling would be that you would be getting
it the wrong way round if you said that because they are not doing well in
tests, they are disengaged.
Q346 <Mr. Chaytor:> In any high-stakes testing system, 80% pass,
but 20% fail. I am interested in
whether there is any link between the sense of failure and loss of self-esteem
of those who so publicly fail, and disaffection in the early years of
secondary-Key Stage 3-which is a major concern of the Government.
<Ralph
Tabberer:> First I question the premise that these are,
in conventional terms, high-stakes tests.
We normally talk about high-stakes tests as determining for pupils the
school they go on to within a selective system. Within our assessment history, if we go back 20 or 30 years and
look at tests such as the 11-plus, those might legitimately be called
high-stakes tests for children, because they were so determining of the next
stage. We have got medium-stakes tests
for our students that allow them to show what they can do and give them and
their parents a sense of where they are.
They also happen to give us very useful information, as Mr. Slaughter
has indicated, for policy development and accountability.
The
Minister is right to point to the progression tests as an interesting
experiment. What we have been keen to
do is to offer Ministers alternative approaches. We have listened, as you have, to comments over the years about the
possible downside of what you termed rigidity.
We have listened to people talking about the possible effect on the year
6 curriculum and the possible effect on the pace of the whole Key Stage. So looking at a progression test as an
alternative, the idea of actually being able to draw down a test to be ready on
time for pupils may give teachers and pupils a different environment. We think it is appropriate to pilot that,
but we do not think it appropriate to rush for that solution. We want to give Ministers alternative
options, so they can deal with just that sort of question.
<Chairman:> We now move
on. John Heppell will lead us on the notion of centralised control and validity
versus reliability.
Q347 <Mr. Heppell:> Looking at how the Government are addressing
A-levels and diplomas, people might think that there has been a subtle change,
in that whereas we were moving towards greater reliability at the expense of
validity, there has been a slight move the other way. Many universities have said to us that people come to them
without a sufficient breadth of skills in a particular subject. We have heard from examination boards that,
instead of being closed, the questions are now, if you like, opened out, or
open-ended. Obviously there is a
consequence to that, and I know that such things are finely balanced, but can
you confirm that there is a move to ensure that validity goes up a bit in the
rank, as against reliability?
<Jim
Knight:> We want both, obviously, and we will continue
to evolve and improve the A-level as we introduce the diplomas. We are mindful of the criticism, which we
have heard from both employers and universities, that people may be well versed
in the particulars of their subject, in which they perhaps took an A-level, but
they need to do better in terms of some of the wider, softer skills. That is why we are introducing measures such
as the extended project into A-levels.
That is also why we have introduced personal learning and thinking
skills, and why work-related learning runs through the diplomas, in
order to offer what both universities and employers are saying they want more
of from our young people.
Q348 <Mr. Heppell:> Moving on from that slightly, you now have
what is called controlled internal assessment.
However, we are told by our advisers that there has always been
controlled internal assessment. You
have not actually taken the coursework into account in assessing that. You are taking an add-on to the coursework
and assessing that. Is that not the
case? Are you not interfering, from a
centralised position, with what should be a creative process? I understand that you set down the
guidelines fairly rigidly in respect of what the controlled internal
assessment-the very name says it-does.
Is it a lack of faith in teachers?
<Jim
Knight:> No, I do not think that it is a lack of faith
in teachers. Again, we had to design an
examination system that retains the confidence of everyone that it is serving
and those involved, including the pupils-most importantly-the teachers and
parents, the employers and those in further and higher education. We found that an over-emphasis on coursework
in some subjects was problematic, so we have moved away from that. The use of controlled internal assessment
is, perhaps, a halfway house between the examination hall one afternoon and the
continuous assessment of coursework. Ralph,
do you want to add anything to that?
<Ralph
Tabberer:> Again, I think it is a question of looking at
different subjects and seeing which is the right design that suits them. There are some subjects for which coursework
is a natural or highly desirable assessment method-art, for example. There are other subjects for which external
assessments work almost fully. For
example, we have moved more of maths into that realm. We have been trying, with the controlled assessments, to create a
more controlled environment where it is more likely that the assessments made
by one teacher will be replicated by another.
That addresses public questions about coursework, as the Minister
suggests, and about the possibility that there is variability, which affects GCSE
results.
Q349 <Chairman:> Is this an endless search for an accurate
method of evaluating the teaching and the quality of knowledge that the child
assumes? It is endless, is it not? Does it not squeeze out the thing that John
is pushing you on: the creativity-the breadth, depth and the imagination of
it? The Welsh have got rid of it. Are
they struggling because they have got rid of this testing?
<Ralph
Tabberer:> Any assessment system is a design where you
are trying to balance validity, reliability and manageability. You try to get the best design for your
whole system. I think that we have been
very consistent, actually, with the principles that were set out at the start of
the introduction of the national curriculum assessment. We have tried to stick to those principles
in measuring and giving parents and pupils information about what they can
do. We have made changes when there has
been a build-up of concern and we have felt that it has not been possible to answer
that. So we have moved when things have
been unmanageable. We have not been
inflexible. The basics are still there.
Again,
if we find better ways of assessing, we will put those options to
Ministers. I suppose that one of those
areas in future will be IT-delivered testing.
We should certainly keep our eyes open for alternatives that give us the
best balance.
Q350 <Chairman:> Is there a horrible generation in the
Department that read, as I did as a young man, a thing called "The One Minute
Manager", the central theme of which is that, if you cannot measure it, you
cannot manage it? It seems to me that
the Department is still desperate to measure all the time. They do not measure so much in the
independent sector, do they? That is
not the way they get good results, is it?
Not through constant measurement.
<Jim
Knight:> I am a product of an independent school where
I was tested an awful lot. That is part
of a traditional elitist education, I think.
It is an endless pursuit because the economy is ever-changing and we are
ever-changing socially. The environment
in which schools and the education system are operating is ever-changing, and
education has to respond to that. It
therefore has to keep changing. The
environment that I grew up in and in which I went to school was one in which, if
you were lucky, 10 per cent. went to university. However, skills needs have changed, as we discussed in other
evidence sessions. We therefore need to
change the qualifications to respond to that change-and as you change the
qualifications, you change the forms of assessment.
Q351 <Mr. Heppell:> Have you actually come across problems on the
ground? Has somebody that is studying
or somebody that is teaching said, "Look, it doesn't work like this. We need to have more flexibility in the way
we deal with it"? Are there problems on
the ground?
<Jim
Knight:> Specific problems?
<Mr. Heppell:> Problems specific to the controlled
assessment rather than just an assessment of coursework.
<Chairman:> Ralph, perhaps
you should answer that. You used to be
in charge of teacher training.
<Ralph
Tabberer:> I am trying to think of any particular cases
where people have brought up additional problems relating to controlled
assessment, but I cannot think of a piece that does that. In general, though, I am clear that we do
monitor the impact of assessment; we monitor not only the impact on pupils and
schools but the opinions of different parties.
We keep that in view, and as I tried to say earlier we are willing to
change, and when we can we put alternative proposals to Ministers. We think we have something with the
progression tests that might give an alternative approach, and Ministers have
been quick to say, "Well, let's pilot it.
Let us not implement it until we know more about how it might
impact." That is all evidence of us
being intelligent and open. We keep on looking for improved solutions, but not
moving away from the basic principles on which the current model was
developed.
Q352 <Chairman:> The desire to measure is not driving out the
imagination and joy of education?
<Ralph
Tabberer:> It should not, no.
<Jim
Knight:> I thought of this Committee when the
announcements were made last week around the cultural entitlement and the
importance of greater partnership.
Similarly, we have a commitment to sport in the curriculum that we are
developing, and we have a cooking announcement.
Some
of these things are not that easily measured.
The pilots on culture will test how easy it is to measure the five hours
in the curriculum. We are ever-evolving
about this. Some things are much easier
to measure than others, but the pressure that I was talking to John about in
respect of employers and universities around the softer skills is more
difficult to measure-but that does not mean that we are not committed to trying
to work harder to develop that.
<Ralph
Tabberer:> Of all the schools that I have visited, I
cannot think of one that is immensely creative that is not also interested in
the tests and doing their best by them.
I cannot think of a school that I have visited that does well in tests
that does not have a strong creative side as well. Sometimes we set these aspects as alternatives, but I do not
think that that is always fair. There
are plenty of schools that manage to do both very well indeed. They are well led, and they know what they
are doing. There is plenty of evidence
that you can have creativity, a lot of autonomy and a lot of self-determination
by teachers and that you can have a properly assessed system that gives parents
a good account of what is happening in the school.
<Chairman:> Let us drill
down into testing and school accountability with Annette.
Q353 <Annette Brooke:> I want to look at whether the test is
fit for all the purposes that we try to use it for. I do not think anyone would disagree that there should be some
form of measurement of pupils' progress.
But is not the difficulty that the Government are placing so much
reliance on a test that was designed for one purpose but which is now being
used to measure whole school performance?
Do you not have any concerns about that?
<Jim
Knight:> If it were the only measure of school
performance and the only aspect of accountability, one would have to be
concerned that we were putting all our eggs in one basket. But we are not. We have inspection and we look at performance in other
areas. We only test a few subjects
through the SATs. We are still looking
at how people are doing on other things.
We also have national strategies working in some other areas. So I would say that it is a critical measure
but it is not the only measure.
Therefore, I am happy with how it sits.
Q354 <Annette Brooke:> Many parents will focus only on this
as a measure. Personally, I can feel
fairly relaxed that a local authority is looking at the whole school, because
it might set off some warning signs that need to be dipped into. However, the majority of parents are not
going to dip below what they see in their local newspaper. Therefore, do you not think that this
measure is harmful to the idea of parental choice?
<Jim
Knight:> It goes back to what I was saying
before. We do not publish ranked
tables; the newspapers choose to rank locally what we publish. I have spoken to various people who work in
the media who were a little sceptical about whether they should publish the
tables, but when they saw how well rival newspapers sold when they published
them they soon leapt at it and published them too. There is no doubt in my mind that if we did not publish the tables
someone else would. As I said before,
if we as a Department do it, we can be scrutinised; the process is carried out
independently using national statistics, and we know that it will be done
objectively and fairly, rather than someone who is not subject to as much
scrutiny being able to do it.
Of
course, our local newspaper, The Dorset
Echo, regularly reports the Ofsted results of schools as and when they
achieve them. They usually publish the
successes and the pictures of celebrating pupils, teachers, head teachers and
governors, rather than those schools that get satisfactory ratings, but
obviously they will occasionally run stories on those schools that are not
doing so well. I think that those
stories, along with what parents say to each other, are as informative of
parental choice as the tables.
Q355 <Annette Brooke:> I would still disagree that parents
have full information. For example,
someone told me recently that a certain selective school has 100% five A* to C
grades at GCSE, and said, "Isn't that fantastic?" That is the perception, is it not, because of the way that league
tables are presented?
<Jim
Knight:> Again, I cannot be accountable for how league
tables are presented in every single newspaper. We publish the contextual value added as well as the raw
scores. We are now moving to
progression targets, so that we are looking at the proportion of schools that
progress children through to levels for each key stage. So we will be reporting a number of
different things. We are putting a
science and language into the indicator, so that all that data will be
available in the attainment and assessment tables. However, we cannot tell newspapers how to report the tables.
Q356 <Annette Brooke:> May I just dig in to the contextual
value added measure? We have had
varying evidence on this measure. I
think that there was one throwaway comment that it was really just a measure of
deprivation. There is also the issue
that not all parents will understand the significance of the measure. Is there more that you could do as a
Department to make the measure stack up better and be genuinely more
informative for parents?
<Jim
Knight:> I would not say that the measure is perfect,
and I will let Ralph give the technical answer on CVA in a moment. However, one of the reasons why I have been
pushing on the progression measure is that it is slightly easier for people to
get their head round, as to how every single pupil is progressing. So it is not a threshold but something that
applies across the board. I think that
that will help in respect of the concerns that you raise.
Q357 <Annette Brooke:> I would
like the answer to my question.
However, I would like to pick up on that particular point about
progress. An admirable school may have
a high percentage of children with special educational needs. Up to 40% of its
children may have special educational needs. It is quite likely that those
children will not be able to progress more than one level in the standard
assessment tests over a given period. If you use that information across the
whole school it will add even more distortion to the picture.
<Jim
Knight:> I am not sure whether there will be any more
distortion than there is at the moment. It is a reasonable criticism. When we
introduce the foundation learning tier, which can accredit progression and
learning below level 1 in national vocational qualification terms-it is very
confusing having national curriculum and NVQ levels-we may be able to look at
whether that can be reflected. At the moment, if you have a high proportion of
children with SEN, you will not do as well in the raw scores as those schools
with a lower proportion.
<Ralph
Tabberer:> The most important thing is to see the Ofsted
inspection as the top of the tree. For parents, who are your concern here, the
Ofsted inspection is probably the most rounded, richest and most comprehensive
assessment that they will get of a school's strengths and weaknesses. I would
always point parents to that assessment as the best thing to consult. When we
publish results, we try to ensure that the public can see raw results and that
they can look at comparators and benchmarks. We have had a lot of discussion
about which way to approach value added. In our consultations on that, we have
settled on contextualised value added as the most fair. In trying to publish
series of data, we are following the principle of being transparent about all
of the analyses so that parents can access the information that they understand
or the information that they want. I have to say that we get very few
complaints about the testing regime.
Q358 <Chairman:> That is
because they cannot understand a word of it. You have to understand it to be
able to complain about it.
<Jim
Knight:> They could complain that they
cannot understand it.
Q359 <Chairman:> That
is true. Come on, look at your site. Get a group of parents to look at the site
and evaluate how much they understand the contextual value added score. It is
very difficult to understand. Why present it in that form?
<Ralph
Tabberer:> Equally, why hold it back? What I
am saying is that we know that many parents consult Ofsted reports. We know
that in putting those Ofsted reports together, the school, in its
self-evaluation, and the inspectors will draw on all those analyses. There is
no reason for us to hold back those analyses. What we do is make them
transparent.
Q360 <Chairman:> You
are missing the point. We represent a broad swathe of population in our
constituencies and we want the information to be intelligible to people with
higher education, lesser education and very little education. We want them all
to be well informed. In the way that you present CVA scores, what you have set
up is a system that is only understandable to people with higher levels of
qualifications. That is unfair.
<Jim
Knight:> I want to challenge that if I may. I do not
think that it is that difficult to understand that in CVA terms, 1,000 is the
norm. If you are above 1,000, you are adding value better than the norm. If you
are below 1,000, you are adding value lower than the norm. If that is all people
understand, then it is pretty straightforward.
Q361 <Mr. Chaytor:> Surely, the real point is that the
significance of the degree to which it is below 1,000 is unclear. What is the
Government's resistance to a simple banding system or a five-point scale, from excellent
to poor, to rate a school's value-added performance? Would that not be easier?
We use simple banding systems to describe most other public institutions. Yet
this value-added concept is a-
<Jim
Knight:> A star system.
Q362 <Mr. Chaytor:> What parents want to know is to what degree
their school differs from what could reasonably be expected. As it presents at the moment, they just
cannot work that out.
<Jim
Knight:> The problem is that you would be lumping lots
of different judgments together. We would
have constant Select Committee inquiries into whether it was a fair way in
which to lump everything together.
Q363 <Mr. Chaytor:> That is what the Ofsted report is.
<Jim
Knight:> The Ofsted report gives a series of judgments
under a series of headings.
Q364 <Mr. Chaytor:> It comes together under one scale.
<Jim
Knight:> Yes, the Ofsted report is the thorough,
authoritative reflection on a school, whereas, finding ways to lump things
together using year-by-year assessment and attainment tables would make us
vulnerable to criticism and questions such as whether we left out soft skills,
science, foreign languages and so on.
There are many ways of making judgments about a school. You could say the same about a hospital to
some extent, but a hospital's performance is rated by inspection.
Q365 <Annette Brooke:> Can I ask you to have a look at how
informative CVA is for the vast majority of parents? The vast majority of parents do not really appreciate it and do
not take it on board. They still see,
in a selective system, that school X must be better than school Y, because it
has better figures according to raw results.
School Y might be doing fantastically well, but that is not the message
that comes out.
<Jim
Knight:> Annette, we would always look seriously at the
Committee's recommendations, and I shall look out for that one in
particular.
Q366 <Chairman:> It is not rocket science. Do a quick little test-anyone could do
it. Get the Institute of Education to
see how understandable it is. You would
not have to take a big sample-you could simply test how many people easily
understand it, as Ralph said, and sample by class and educational
background. You could do it in a
week.
<Jim
Knight:> I will reflect on the wishes of the
Committee.
Q367 <Annette Brooke:> Finally, even if the league tables
really work and if they convey something or other to parents, you often argue
that they are driving up standards.
What is the causal link between league tables and the driving up of standards? Do you genuinely have evidence for such a
statement?
<Jim
Knight:> We have discussed this afternoon the nature
of testing and the publication of results in tables, and their use in making
schools accountable. Part of the debate
is about whether the tests are high stakes.
Schools take how well they are doing in the tests really seriously,
which drives forward their literacy and numeracy priorities. Getting things right in English, maths and
science is a priority. There is
evidence that such sharp accountability has driven things forward in those
subjects.
<Annette Brooke:> I shall
leave the drilling on that question to my colleagues. You write to your 100 most improved schools, but are they most
improved on raw results, on CVA or on both?
<Chairman:> Let us move
on. Fiona, do you wish to ask a
question about the unintended consequences of high-stakes testing?
Q368 <Fiona Mactaggart:> Minister, you said earlier that no
individual pupil spends more than 2% of their time on taking tests. That might have been a mis-statement: Sue
Hackman told us that no one spent more than 0.2% of their time preparing for
tests, but David Bell said in writing that that meant taking tests. Do you have any evidence to show how long
pupils spend on revision and preparation for tests?
<Jim
Knight:> I do not have those statistics. Key Stage 2 tests take a total of five hours
and 35 minutes in one week in May, so the amount of teaching time taken away so
that pupils can sit the test is 0.2%.
For Key Stage 3, seven hours and 55 minutes, or 0.3%, is taken away. Those are the figures that Sue quoted. When I recalled that it was 2%, I should
have said 0.2%. However, I do not have
any exact statistics on the average amount of time anyone spends preparing for
the test, which would be hugely variable.
With some schools-and I think the ideal is that they would just
integrate it into their learning-there would be a certain amount of preparation
for taking a test, because it is just good practice to instil in young people
the belief that when they are about to take an examination they should prepare
for it. I prepared a little bit for this hearing, believe it or not. However, I do not know exactly how the
figure might average out across the country.
Q369 <Fiona Mactaggart:> Would you be surprised to learn that
the Qualifications and Curriculum Authority did a survey of primary schools
that showed that at Key Stage 2, in the four months before the test, it was 10
hours a week on average per pupil? That
is nearly 50% of the teaching time available.
<Jim
Knight:> I have not seen that research. I do not know
whether it is something with which Ralph is familiar.
<Ralph
Tabberer:> I am
certainly familiar with the QCA research.
It goes back to what I said earlier about the problems that have
surfaced from time to time regarding the possible impact on the year 6
curriculum. That is why we are looking
at-and we always listen to questions raised by the profession and by
non-departmental public bodies-the impact of the current system.
Q370 <Fiona Mactaggart:> What effort is the Department making
on this? Of course public examinations require rehearsal and revision-I have no
doubt about that-but the key stage tests were not originally conceived as that
kind of test for pupils. If half the
time of a Key Stage 2 pupil is taken up with revision for the test, that is
time when they are not learning new things.
<Jim
Knight:> I shall let Ralph come in on this in a
minute, but I dispute that, and would be very surprised if there are young
people who are just sat there revising when they know it all. If they are spending some time making sure
that when they complete year 6 they have the necessary maths, English and
science skills to be able to prosper when they get into secondary and move into
Key Stage 3, I do not have a problem with that. I do not have a problem with their being taught the things they
need to be able to pass the test, even if that means more catch-up classes, or
even if it means a shift in the amount of time being spent on the priority
subjects in their final year in primary.
<Ralph
Tabberer:> I am sorry if I nearly interrupted my
Minister, but it was only to be just as quick in disputing the premise that
revision is wasted time. It is
enormously important that young people are as prepared as possible for level 4,
particularly in English and maths, so that they are ready to access the
secondary curriculum. I am not
concerned if that is a prime interest for teachers teaching in year 6. We know there is a tremendously strong
relationship between pupils who attain level 4 in English and maths at that age
and their results at GCSE, and if we can give more young people access to that
level we are sure that they make a stronger transition into secondary schools
and are more likely to succeed. The
threshold is not there by accident, and I do not think we should treat revision
as necessarily a negative.
Q371 <Fiona Mactaggart:> I do not do so, but I am concerned
about standards. I have only one
concern, and it is about standards.
There is a balance between testing and standards, and testing is the way
in which you assess whether a child has achieved a standard. You might want to anchor a child into that
standard, but I am concerned that our focus on testing may-I am not saying that
it does, but there is a risk that it will-interfere with the drive for
standards. In a way it can become a
substitute for standards. For example,
Mr. Tabberer, your experience is as a teacher educator and the head of teacher
education. Would you say that
well-implemented teaching for learning, and that kind of in-teaching
assessment, has the capacity to drive up standards faster than the constant
testing of children?
<Ralph
Tabberer:> Yes, I believe that assessment for learning
is an immensely important and continuous part of the teaching and learning
process. I also believe in occasional
external assessments to lift performance and give young people the opportunity
to perform at a higher level. Both have
a beneficial effect, and I have never said that one is more important than the
other-both are of value.
The
helpful thing in your distinction is that we do not want to see children being
drilled so that they can just repeat low-level processes accurately and get
marks for that-we are all clear that we do not want that. This is where I turn to the professionalism
of teachers, and when I talk to them, I do not hear that they are engaged in
that process-they are trying not to drill, but to prepare pupils so that they
can do their best in these tests. We have
a good balance. The evidence is saying
that if there is anywhere in our overall system where we need to invest in
assessment, it is in more assessment for learning, so you are right.
<Jim
Knight:> Which we are doing, with £150 million over
the next three years.
Q372 <Fiona Mactaggart:> So far, our efforts to implement it
across the board have not been as good as they should have been. Is that not the case?
<Jim
Knight:> We can always do better.
Q373 <Fiona Mactaggart:> Let us briefly take this back to the
point that Annette raised about the results of tests being used for other
purposes. I do not think that most
teachers drill pupils, but some do, and my anxiety is that that is partly
because we use the tests in the ways that we do. There is a risk-I would like your views on this-that drilling
might become more, not less, prevalent in the single level tests, although
there is a possibility that it might become less prevalent. However, I would like your opinion on the
fact that the research shows that about 30% of pupils at Key Stage 2 are misallocated
levels just because that happens. About
15% are allocated a level below that which they should have and about 15% are
allocated a level above-I might have got the margins slightly wrong, but that
is how I read the research.
One
anxiety about the single level tests-this is great for the individual pupil-is
that once you are through the gateway, you are safe. My anxiety is about pupils getting unreliable success results,
although that would be good for those pupils and might motivate them, with good
consequences. However, because we use
the tests to measure schools, there is a real incentive for teachers to push
children through the gateway. I have
not seen any evidence that the Department has addressed the damaging
consequences of what the evidence suggests is really going on.
<Jim
Knight:> We are obviously in the early days
of the pilot on the single level tests, and we have had only the December round
of testing. It is worth noting that we
made the decisions about changing the measures on the tests in November, before
the December tests were even taken, let alone before the results were known-I
say that for the benefit of any media representatives listening. We will see what happens in those pilots,
but one thing that was a bit weird about the patterns from the December tests
was the number of entrants who were put in at the wrong level. As things bed in, teachers will understand
the importance of the assessment of their children's learning and the fact that
these are pass/fail tests. There is a
big difference from the SATs as they stand, where the examiner makes the
assessment of which level the pupil is at.
In this case, the teacher makes the assessment of which level the pupil
is at, then the examiner checks whether the teacher is right. That changes the terms of trade quite
significantly. It puts more emphasis on
the teacher's own assessment. That is
why assessment for learning is built into the "Making Good Progress" pilots,
alongside one-to-one tuition, progression targets and incentive payments.
Q374 <Fiona Mactaggart:> I am trying to refresh my memory about
your letter to the Committee. As we saw
it only this morning, I might be wrong, but one of the things that you were
wondering about-this was on the third page of your letter-was whether part of
the reason for the wrong entry might have been that pupils were not being
focused and prepared in the way that they have been. I am quite interested in this issue. Let me explain why.
I
am an MP for an area that has the 11-plus.
When it was originally conceived, the 11-plus was said to be an
assessment of where pupils were at and not a consequence of drilling and so
on. Of course, ambitious parents drill
their children extensively. Of course
they do-they pay for tutors if they can afford it-because it makes a huge
difference to pupils' performance, as a result of which it is not the kind of
assessment that it was in the days when I did the 11-plus. I went into school one morning and the exam
was stuck in front of me; I did not know that that was going to happen that
day.
Today,
it is quite a different experience. I
think it would be a good thing if we could, in the single level tests and in Key
Stage 2 tests, make that the norm. It
would create a blip in results in the short term, because of the lack of
preparation, but it might tell us better truth about what pupils know, and mean
that teachers could focus on getting the pupils' understanding strong rather
than on getting pupils through examinations.
<Jim
Knight:> I think I am with you on this. However, I would sound a note of caution, in
that I do not want to take the pressure off.
<Fiona Mactaggart:>
Neither do I.
<Jim
Knight:> I know you do not. We are explicitly designing this to drive progress for every
single pupil, regardless of their starting point, because of the concern that
some people have expressed about the current situation, in which, let us say,
at the end of Key Stage 2 there is too much focus in some schools on people on
the margins of a level 4 and not on the rest, because that is where the measure
is. This is about every single child
making two levels of progress within each key stage and being tested when they
are ready, so the testing is the culmination of learning when the child is
ready to take the test, rather than everyone being pushed and drilled for an
arbitrary date in June or May or whenever it is. That feels like a good model to explore in the pilot.
In
relation to ambitious parents, I think the answer is to try to get every parent
as ambitious as the next for their child.
I would love an aspect of this to be parents asking at parents evenings
or in e-mails to teachers when their child will be ready to take the next test
in reading, writing or numeracy, so that there is a bit of a push in the same
way as there is with the music grading exam.
In the days when my daughter was young enough to take her cello exams,
when I saw the cello teacher I would ask when she would be ready for grade 4,
grade 5 or whatever. Just that little
bit of push in the system for each individual is not a bad thing.
<Ralph
Tabberer:> I agree entirely that whenever you pilot or
look at an alternative approach to assessment, the thing you have to do, as you
are suggesting, is look at the changes that it causes in teacher behaviour, pupil
behaviour and parent behaviour. That is
precisely why we want to go through this pilot exercise. When you are looking at a pilot and weighing
it against the strengths and weaknesses of an existing system, you are asking
yourselves questions about whether it might cause less time in year 6 to be
devoted to revision, less time to be devoted to drilling and so on.
I
think we have to go through a couple of these rounds of the progression tests
and look quite closely at whether those are the problems, or whether we get a
new set of behaviours. At the moment we
are very open to those possibilities.
You can always set out a theory that a new assessment form could cause
this or that, and there are plenty of people out there with experience of assessment-and
some without-who will proselytise for different theories about what might
happen.
We
clearly want, with the progression test, to change some behaviours, say earlier
in Key Stage 2-to get some of the questions that are being asked about the progress
of all pupils asked earlier in the key stage than may be the case now. If we can make it more obvious that children
are perhaps progressing more slowly than we wish in year 3 and year 4, that
could be a good effect, but if we misjudge the accountability to the point
where everybody feels that they have got to drill in order to prove worth, then
we have gone too far. These are very
subtle things, but these are the crucial questions that we have got to ask.
Q375 <Chairman:> I
have to say that your language worries me, because it is you, and then the
Minister, who have kept talking about drilling-drilling, driving and
pressure. That language seems to me all
wrong in terms of the educational process that Fiona is probing on, in the
sense that you set up a system that has an enormous amount of testing in it;
you incentivise teachers to achieve on that business of testing and being
successful; and you seem not to be able to step outside that and say, "But what
does this achieve? What do independent
assessors, researchers at whatever institution, tell us?"
It
is an internal world; you seem to glorify testing and believe that it is going
to lead to a better quality of education for the children. It worries me tremendously when you talk
about it, and when you brush aside the fact that it could be 50% of the time in
school spent on trying to get the kids to achieve on the test. In the school where 30% or 40% of children
have special educational needs, and there are a lot of poor kids, I have a feeling
that the percentage in those schools, particularly perhaps the ones that might
just make it with extra drilling, would be much more intense than 50%. It worries me that this is not the world
that I see when I visit schools-the world that you describe. They seem to be lost in this drilling,
driving and pressure.
<Jim
Knight:> Fiona started it with the drilling.
<Chairman:> No; you guys
came up with drilling.
<Ralph
Tabberer:> I have clearly given the wrong impression if
you think that we just, in your words, drive this as an internal system. Far from it. We do not just sit in Sanctuary Buildings and imagine what we
think will be an effective system. We
do a lot of work looking at what the research tells us is working, and what is
not working. I would say there is an
immensely powerful trail of evidence that our approach to assessing has been
very effective over 20 years.
Q376 <Chairman:> So there is no evidence that the marginal
student gets even more pressure to get to that next level? Is there any evidence that those people who
teachers think are never going to make the standard are just left in the
wilderness?
<Ralph
Tabberer:> I accept that there are real questions about
where the onus of attention goes with any external tested system at the end of
a key stage. Again, that is why within
the Department we have been so interested to move towards progression as a new
model, looking at the progress that pupils make across the key stage. It is just as important to us that a child
who is working at level 1 gets to a level 3 as that a child who is working at
level 3 gets to a level 5, through the key stage. Far from being locked into just one approach we are much more
concerned with the overall range.
Where
I perhaps disagree with you, I am not sure, is that I believe that measurement
helps us to understand where a child is and gives us a sense of where they are
going. It helps to give the parent and
the child that sense, and it helps to give the school that sense. That is itself worth having. I think that helps
to open up the secret garden.
Q377 <Chairman:> Would not a qualified and perceptive teacher
give you that?
<Ralph
Tabberer:> Yes, but when you are trying to use a system
also for public accountability, as we are doing, you are looking for a
manageable solution. I believe that our
system of external testing creates the best possible model. Indeed, in terms of the international
investigation of different models, we get a trail of people coming here to look
at the way our system works, to look at the power of the data that we have
available and to look at the willingness with which we have been able to
confront areas of failure-areas of failure for the disadvantaged as well as the
advantaged. Indeed, if you look at the
recent comments from colleagues in OECD they point to our system as having
probably more of the components of a modern and effective education system than
any other they know.
<Chairman:> You have been
very patient, Annette, in getting to your questions.
Q378 <Annette Brooke:> I am still rather concerned about
children who would only be able to progress one level being ignored. In many cases, it is probably a cause for
great celebration that those children do progress that one level. The teacher will be trusted to celebrate
that, I guess. Why, on perhaps the
easier aspect of going up two levels for children of a certain ability, are
there financial incentives? It seems to
me rather topsy-turvy, and we may be in danger of not putting enough incentives
in the system for children who, however high the quality of teaching, will find
it much more difficult to progress.
<Jim
Knight:> We have yet to announce exactly how the
financial incentives within the pilot will work. My inclination is to focus around those young people who have not
been making the sort of pace of progress they should, so rather than just
paying money for those who you would normally expect to do well, you focus the
incentive around those who have not been doing as well as they should, who have
not been making the pace of progress that you want and being able to reward
those schools-not teachers, but the schools themselves-if they manage to
achieve that. We will make some
announcements on that fairly shortly.
As
for those who are making only one level of progress during key stages,
obviously there are some with special educational needs where that might
apply. It is worth bearing in mind
that, as Ralph said, the new system will celebrate as much someone moving from 0 to 2, or 1 to 3, as it will someone moving
from 5 to 7. That is a significant step
forward in terms of a system that rewards improvement across the whole ability
range.
<Chairman:> We must move
on.
Q379 <Lynda Waltho:> Minister, thank you for your letter,
although for me it arrived a bit too close for comfort.
<Jim
Knight:> I have apologised to the Chairman.
Q380 <Lynda Waltho:> I did not know what you were saying to
us, but I have now had a chance to read it.
It answers some of the questions that I was forming.
Interestingly,
you refer to some unexpected patterns in the results. Indeed, Sue Hackman said the same in her letter to schools in
January. You go on to say what might
have caused those unusual patterns, but you do not say what the patterns
are. I wonder whether you would expand
on that? You touched on the subject
slightly with Fiona, and I wonder whether you would expand a little on what
those patterns might have been.
<Jim
Knight:> As I said at the outset, we will publish a
proper evaluation of the December and June tests in the autumn, when they can be
analysed more fully than in the early stages of a pilot. We should bear in mind that it took four
years for the SATs to be piloted. All
of these new tests take some time, and you will inevitably have some teething
troubles. We do not publish as each of
the test results comes out. We do not
publish them in a drip-drip fashion; we tend to do an annual publication of
test results. I do not think that we
should do anything particularly different for this, because it might skew
things and put undue pressure on those schools that are in the pilot
scheme.
As
I said in the letter, the most significant unusual outcome was variations
between Key Stage 2 and Key Stage 3 pupils taking the same test. So, let us say that they were taking a level
4 writing test. The Key Stage 2
students were doing significantly better when they were taking exactly the same
test as the Key Stage 3 students. Now,
that was a bit odd. We will have to
wait and see why that was the case. It
might just be that the sorts of scenarios that they would have been writing
about in that test were more engaging for younger children than for older
children; I do not know. Maybe there
are issues of motivation in Key Stage 3 that are different from Key Stage 2
around taking the test.
There
were some other patterns around the higher levels and the lower levels, and the
expectations were different between the two.
However, when we had a first look at the overall patterns that were
emerging, we just thought that there were enough oddities that, although they
are not out of keeping at all with early pilots, we should ask the National
Assessment Agency to run some checks and make sure that the marking was right
before we gave the results to the pupils and the schools, which we have now done.
Q381 <Lynda Waltho:> There is a perception that part of the
problem might have been a higher rate of failure, if you like.
<Jim
Knight:> No, it certainly was not about the results.
It was the patterns of results and the differences between different types of
students, particularly that difference between Key Stage 3 and Key Stage 2
students, that we were concerned about.
The decisions that we made in November were about how we pitched this
test so that the results are more comparable with the current SATs, in terms of
the level gradings. We made those
decisions before these tests were set and before we had the results. Those sections of the media that think
that we have changed the rules because of results are misunderstanding at
two levels: first, they are misunderstanding if they think that we are unhappy
with the overall level of results in these pilots; and secondly, they are
misunderstanding the sequence, because they are interpreting these changes
as a response to results.
Q382 <Lynda Waltho:> If we could drill down on this issue,
basically what I want you to confirm is whether the passing rate was lower than
you expected it to be. I think that
that is cutting to the chase.
<Jim
Knight:> Again, it is more complicated than that. In some tests, the results were better than
expected and in some tests the results were worse than expected. So, it was not about the pass rate; it was
about the pattern.
<Ralph
Tabberer:> There were good results and there were some
weak results, but the anomalies were sufficient to make us appreciate that
there were some things that had changed within the tests. As we had set up these new tests, there was
a test effect, but we do not know what that effect is yet and we will not know
until we run another pilot. There are
some things related to teacher behaviours changing, including which children
they are putting in for the tests and at what stage. We do not know how much is down to that factor. There are also some questions about the
level that we are pitching the tests at.
However, it is impossible from this first pilot to separate out which
effect is pushing in which direction.
You
must also remember that these are self-selecting schools, so there is no sense
in which they are a national representative sample of the performance across
the country. So, we are having to find
our way through this process quite carefully.
We need another round of results before we know what is going on. We will have a chance to try another set of
tests and then we will be in a position to make the review of this process
available at the end of the year.
Q383 <Lynda Waltho:> If the problem is a higher rate of
failure, it might imply that there is perhaps a discrepancy between the tool of
assessment and the teacher judgment about whether a child has reached a
particular stage. If that is the case,
how could we resolve it?
<Ralph
Tabberer:> First, the only thing we can do is
speculate-we are not in a position to know and we cannot answer whether it is
down to a change in teacher behaviour.
If a child does less well in a test, say, you may deduce that that
reflects that the teacher has got the level wrong. However, we do not know whether teachers behave differently or
take a different approach to progression tests than they would to an external
test at the end of year 6. They might,
for example, be pitching for a child to show a level earlier than they normally
would in order to take part. However,
we will not know enough about how people behave until we review the situation.
<Jim
Knight:> Equally, a Key Stage 2 maths teacher, for
example, might be very familiar with levels 3, 4 and 5, because they deal with
those all the time. However, they might
be less familiar with levels 1 or 7, say.
Making the assessment in those very early days-in December, they were
only two or three months into the pilot-and making the right judgment on
whether pupils were ready to take some of the tests, might have been
difficult.
Q384 <Lynda Waltho:> The Government have put a lot into
single level testing and have stated that it will be rolled out nationally
subject to positive evidence from the pilot study. That shows quite a lot of confidence. Why are you backing single level tests so publicly before we have
sufficient evidence from the pilots? I
know that you are a confident man.
<Jim
Knight:> Theoretically, the progression pilot, single
level testing and testing when ready, accompanied by one-to-one tuition, is
compelling. It would be a positive
evolution from SATs, for reasons that we have discussed. Naturally, we want such a positive evolution
to work. If it does not work, we will
not do it.
<Lynda Waltho:> That was a
very confident answer.
<Chairman:> Douglas.
Q385 <Mr. Carswell:> I have four questions. The first is general and philosophical. There are lots of examples in society of
testing and qualifications being maintained without the oversight of a state
agency, such as university degrees, certain medical and legal qualifications,
and musical grades. I cannot remember
hearing a row about dumbing down grade 2 piano or an argument about whether the
Royal College of Surgeons had lowered a threshold. Therefore, why do we need to have a state agency to oversee
testing and assessment in schools? Does
the fact that the international baccalaureate has become more popular in
certain independent schools suggest that some sort of independent body, which
is totally separate from the state and which earns its living by setting
rigorous criteria, is necessary?
<Jim
Knight:> Yes, it is necessary, which is why we are
setting up an independent regulator that will be completely independent of
Government and directly accountable to Parliament.
Q386 <Mr. Carswell:> But it will not earn its living by
producing exams that people want to take-it will be funded by the taxpayer.
<Jim
Knight:> Yes, but the examinations are absolutely
crucial to the future of the country and to the future of children in this
country-marginally more so, I would argue, than grade 2 piano-so it is right to
have an independent regulator to oversee them in the public interest. However, we should move on from the QCA as
it is currently formed, which is to some extent conflicted, because it both
develops and regulates qualifications.
Because it undertakes the development of qualifications, it has a vested
interest in their success, which is why we thought that it would be sensible to
split them. We will legislate in the autumn, but we will set up things in
shadow form later this year under current legislation. That means we will have that independence.
Q387 <Mr. Carswell:> If the QCA earns its fees by setting
competitive examinations in competition with other bodies, I have no doubt that
it will be setting good tests.
<Jim
Knight:> Would I not then appear before the Committee and
be asked about the over-marketisation of the education system? People would say
that valuable exams are not being properly regulated or set because there is
not enough of a market to make that particular speciality commercially viable.
Q388 <Mr. Carswell:> Architects and surgeons seem to get on
okay.
<Jim
Knight:> Yes, but there will always be a good market
for architects and surgeons, but there may not be for some other important
skills.
Q389 <Mr. Carswell:> Without wanting to move away from
asking the questions, I wonder whether you would deny the claims of those
people who suggest that over the past 15 to 20 years, under Governments of both
parties, standards have dropped. I will give you some specific instances. In 1989, one needed 48% to get a C grade in
GCSE maths. Some 11 years later, one needed only 18%. That is a fact. Successive Governments and Ministers have claimed
that exam results get better every year. However, in the real world, employers
and universities offer far more remedial courses to bring school leavers up to
standard than they did previously. International benchmarks show that UK pupils
have fallen behind. Does that suggest that, paradoxically, we have created an
education system that is drowning in central targets and assessments, but one
that lacks rigour? Central control is having the opposite effect to the one
intended.
<Jim
Knight:> You will be amazed to hear that I completely
disagree with you.
Q390 <Mr. Carswell:> Which fact do you dispute?
<Jim
Knight:> Ofsted, an independent inspectorate, inspects
the education system and gives us positive feedback on standards. We also have
the QCA, which is an independent regulator. Although we are strengthening the
independence of the regulation side, the QCA still remains relatively independent.
It regulates standards and ensures that the equivalence is there. It says
categorically that standards in our exams are as good as they have ever been.
Then we have National Statistics, which is also independent of the Government.
We commissioned a report led by Barry McGaw from the OECD, which is a
perfectly respectable international benchmarking organisation, and he gave
A-levels a completely clean bill of health.
What has changed is that we are moving to a less elitist system. We are
trying to drive up more and more people through the system to participate
post-16 and then to participate in higher education. Some people rue the loss
of elitism in the system and constantly charge it with dumbing down, and I
think that that is a shame.
<Chairman:> You did not
answer Douglas's point about the particular O-level in percentage terms.
<Mr. Carswell:> In 1989, one
needed 48% to get grade C GCSE maths. Some 11 years later, one needed 18%. Do
you agree or not?
<Ralph
Tabberer:> I do not agree. The problem with the
statistics is that you are comparing two tests of very different sorts.
<Mr. Carswell:> Indeed.
<Ralph
Tabberer:> The tests have a different curriculum, a
different lay-out and different groups taking them. We cannot take one
percentage and compare it with another and say that they are the same thing.
That is why we need to bring in some measure of professional judgment to look
at the tests operating different questions at different times. That is why in
1996 we asked the QCA to look at tests over time, and it decided that there
were no concerns about the consistency of standards. In 1999, we asked the Rose
review to look at the same thing, and it said that the system was very good
regarding consistency of standards. In 2003, as you rightly pointed out, we
went to international experts to look at the matter. We put those questions to
professional judgment, because it is so difficult to look at tests.
Q391 <Mr. Carswell:> Quangos and technocrats are doing the
assessment. The Minister has mentioned
three quangos, so technocrats are assessing performance.
<Jim
Knight:> Look at the key stage results. Look at Key Stage 2 English, where the
results have gone up from 63% to 80% since 1997. In maths, they have gone
up from 62% to 77%. In English at Key Stage
3, they have gone up from 57% in 1997 to 85% in 2007. There is consistent evidence of improvement in standards. It should not be a surprise, when you are
doubling the amount of money going into the system and increasing by 150,000
the number of adults working in classrooms, that things should steadily
improve. The notion that the
improvements are because things are dumbed down is utter nonsense. The international comparators are obviously
interesting and very important to us.
We started from a woeful state in the mid-90s, and we are now in a much
better state, but we are still not world class. We know that we have to do better to become world class, and we
said so explicitly in the Children's Plan.
We also know that if we do not carry on improving, we will be left
behind, because the international comparators also show that more countries are
entering the league tables and more are taking education seriously and doing
well. Globally, the competition is out
there, and we must respond.
Q392 <Mr. Carswell:> We may not agree on that, but there is
one area where I think that we agree, because I agree with what you said
earlier about education needing to respond to changing social and economic
circumstances. If the centre sets the
testing and assessment, it is surely claiming that it knows what is best, or
what will be best, and what needs to be judged. If you have central testing, will you not stifle the scope for
the education system to be dynamic and to innovate? It is a form of central planning.
<Jim
Knight:> We do not specify things for the end of Key Stage
4 examinations; we talk about equivalency in terms of higher-level GCSEs, so if
people want to take other, equivalent examinations, that is fine. The only things where we specify are SATs,
which, as we have discussed, are intended to provide a benchmark so that we can
measure pupil performance, school performance and national system
performance.
Q393 <Mr. Carswell:> I am anxious about the way in which
the SATs scoring system works. I was
reading a note earlier about the CVA system, which we touched on earlier. If testing is about giving parents a
yardstick that they can use to gauge the sort of education that their child is
getting, that is a form of accountability, so it needs to be pretty straightforward. Is there not a case for saying that the SATs
scoring system and the CVA assessment overcomplicate things by relativising the
score, for want of a better word? They
adjust the score by taking into account people's circumstances, and I have read
a note stating that the QCA takes into account particular characteristics of a
pupil. Is that not rather shocking,
because it could create an apartheid system in terms of expectations, depending
on your background? Should it not be
the same for everyone?
<Jim
Knight:> There is not an individual CVA for each
pupil, and I do not know what my child's CVA is-the CVA is aggregated across
the school. The measure was brought in
because there was concern that the initial value added measure was not
sufficiently contextualised, that some were routinely disadvantaged by it and
that we needed to bring in a measure to deal with that. On questions from Annette and others, we
have discussed whether it is sufficiently transparent to be intelligible
enough. I think that the SATs are
pretty straightforward. They involve
levels 1, 2, 3, 4, 5, 6 and 7-where are you at? That is straightforward enough.
Obviously, you then have 1a, b and c and the intricacies within
that.
Q394 <Mr. Carswell:> There is all the contextualising and
relativising that means that you cannot necessarily compare like with
like.
<Ralph
Tabberer:> There is no difference in the tests that
different children sit or in the impact of levels. You could perhaps suggest that the CVA is providing an analysis
that includes a dimension of context, but that is different. The tests are constant. With the analyses, we are ensuring that-over
the years, through consultations, people have encouraged us to do this-a
variety of data is available, so people can see how a child or school is doing
from different angles.
Q395 <Mr. Carswell:>
A final question: living in this emerging post-bureaucratic, internet
age, is it only a matter of time before a progressive school teams up with a
university or employer and decides to do its own thing, perhaps based on what
is suitable to its area and the local jobs market, by setting up its own
testing curriculum? Should we not be
looking to encourage and facilitate that, rather than having this 1950s
attitude of "We know best"?
<Jim
Knight:> We
are encouraging employers to do their own thing and to become accredited as
awarding bodies. We have seen the
beginnings of that with the Maccalaureate, the Flybe-level and Network
Rail. We have a system of accreditation
for qualifications-you might think it bureaucratic, Douglas. We are rationalising it to some extent, but
a system of accreditation will remain.
After accreditation, there will be a decision for maintained schools on
whether we would fund the qualification, and I do not see that as going
away. We will publish a qualifications
strategy later this year setting out our thinking for the next five years or
so. The independent sector might go
down that road. All sorts of
qualifications pop up every now and then in that area. Any other wisdom, Ralph?
<Ralph
Tabberer:> I go
back to the starting point. Before
1998, we had a system whereby schools could choose their own assessments. We introduced national assessment partly
because we did not feel that that system gave us consistent quality of
education across the system. I do not
know of any teacher or head teacher who would argue against the proposition
that education in our schools has got a lot better and more consistent since we
introduced national assessment. We have
put in place the safeguards on those assessments that you would expect the
public to look to in order to ensure that they are valid, reliable and
consistent over time.
<Jim
Knight:>
Obviously, with the diplomas it is a brave new world that has started
with employers and with asking the sector skills councils to begin the process
of designing the new qualifications.
That has taken place at a national level-it is not a local, bottom-up
thing, but a national bottom-up thing.
<Chairman:> We are in danger of squeezing out the last
few questions. I realise that this is a
long sitting, but I would like to cover the rest of the territory. Annette.
Q396 <Annette Brooke:>
We heard a view from a university vice-chancellor that it is possible
that pupils from independent schools will account for the majority of the new
A* grades at A-level. What is your view
on that?
<Jim
Knight:> I
tried to dig out some statistics on the numbers getting three A* grades, which
has been mentioned in the discussion-I am told, based on 2006 figures, that it
is just 1.2% of those taking A-levels.
To some extent that is on the margins, but we have done some research
into whether, judged on current performance, those getting A* grades would be
from independent or maintained schools, because we were concerned about
that. We believe in the importance of
adding stretch for those at the very top end of the ability range at A-level,
which is why we brought in the A* grade.
However, we were conscious of worries that it would be to the advantage
of independent-sector pupils over maintained-sector pupils. The evidence that has come back has shown
the situation to be pretty balanced.
Q397 <Chairman:> Balanced in what sense?
<Ralph
Tabberer:> We have looked at the data, following the
vice-chancellor's comment to the Committee that around 70% of those getting
three A*s would come from independent schools.
From our modelling, we anticipate that something like 1,180 independent
school pupils would get three or more A*s from a total of 3,053, so 70% is far
from the figure that we are talking about.
<Annette Brooke:> We have
to wait and see.
<Jim
Knight:> Yes.
<Annette Brooke:> That
sounded a very reasonable hypothesis.
Q398 <Chairman:> The examination boards also said that A*s
will make sure that fewer kids from less privileged backgrounds get into the
research-rich universities. That is
what they said. Although you are not
responsible for higher education, Minister, every time you put up another
barrier, bright kids from poorer backgrounds are put off from applying. You know that.
<Jim
Knight:> Yes.
We had some concerns about that, which is why we examined actual
achievement in A-level exams, and we were pleased to see that particular
result. The difficulty when we took the
decision, just to help the Committee, was that we could see the logic in
providing more stretch at the top end for A-level. The issue was not about universities being able to differentiate
between bright A-level pupils, because we can give individual marks on modules
to admissions tutors; it was genuinely about stretch. Should we prevent pupils, in whatever setting, from having that
stretch, just because of that one worry about independent and maintained-sector
pupils? We took a judgment that we had
a bigger responsibility than that, which is to stretch people in whatever
setting they are in. That is why we
took that decision, but we were reassured by the evidence that the situation is
not as bleak as Steve, whom we respect hugely as a member of the National
Council for Educational Excellence, might have at first thought.
Q399 <Annette Brooke:> I disagree with you on that
point. Many years ago, for my
generation, we had S-levels, so you had the opportunity for stretch. Why do we need this so tied in? Maybe sometimes we should put the clock
back.
<Jim
Knight:> We had S-levels-I was very pleased with my
grade 1 in geography S-level. We
replaced those subsequently with the Advanced Extension Award, in which I was
delighted by my daughter's result.
However, not many people took them, and they were not widely
accepted-they did not seem to be a great success. S-levels were extremely elitist-it was fine for me, in my
independent school, to get my grade 1.
I am sure that you did very well in whatever setting you were in.
<Annette Brooke:> I did not
go to an independent school.
<Jim
Knight:> They were introduced in the era of an elite
education system. Integrating something
into the A-level is a better way forward than introducing something marginal.
Q400 <Chairman:> Some people on the Committee have always
wondered why you cannot just have the raw data-the scores that you get in your
subjects-and leave it at that.
Universities can judge from that; they do not need the A*.
<Jim
Knight:> We can give that data to admissions
tutors. We have said that we will do
that, which is fine. The issue is about
stretch, as I have said; it is not about differentiation for universities.
Q401 <Mr. Carswell:> You have used the word "elitist"
several times in a disparaging sense.
Is not testing and assessment inherently elitist, because it
differentiates and breaks down people's results in a hierarchy of performance?
<Jim
Knight:> Not
necessarily, because a driving test is not that elitist.
Q402 <Mr. Carswell:> Between those who pass and those who
fail, of course it is. I failed my
driving test the first few times I took it-it was elitist, and for people who
drove, thank goodness that it was.
<Jim
Knight:> I have discussed an education system that was
designed to separate people in an elitist way.
We had a lot more selection, including Certificates of Secondary
Education, General Certificates of Education and the rest. A few went to university, and the rest went
into unskilled or skilled occupations, of which-this is no longer the
case-there were plenty. We cannot
afford to have a level of elitism that is culturally built in. Yes, we need to differentiate, but not in a
way that makes judgments.
<Mr. Carswell:> By definition,
it is going to make judgments.
Q403 <Lynda Waltho:> I want to wave a flag for the poor old
diploma at this point.
<Jim
Knight:> There is nothing poor or old about
the diploma.
Q404 <Lynda Waltho:> No, I do not think that it is poor,
and I want to make sure that it is not ignored. The OCR has stated that in its experience new qualifications take
at least 10 years to become accepted and to take root-I am not sure whether you
will be pleased to hear that. It has
also indicated that in seeking parity with GCSE and GCE, the main parts of
diplomas are increasingly adopting models and grading structures that mirror
GCSE and GCE. What assurances can you
give us, Minister, that the proposed format of the new diploma, which I am
positive about, will be given time to work and will not be subject to undue
interference?
<Jim
Knight:> As you know, the first teaching
will start this September, with the entitlement in 2013 to all 14 of them-we
will decide when the entitlement to the last three is introduced. That is a fair lead-in time-it is not the
full OCR 10 years, but it is fair. The
fundamental design of the diplomas will not change. We are not going to move away from generic learning, which
involves things such as functional skills, personal learning and thinking
skills. The voice of the sector skills
councils, where we started the process, will obviously be heard very loud in
terms of what learning is required for someone to do well in a sector. On additional specialist learning, which is
the area of greatest flexibility, there may well be some things that feel very
familiar in terms of GCSE, A-level, BTEC and the other qualifications that are
part of the landscape at the moment.
The additional specialist learning element and the work experience
element may well look familiar.
We
have said that no individual school or college will be able to deliver those
individual diplomas on their own. We
will have much stronger employer engagement and a style of teaching and
learning that is related to the workplace.
In assessment terms, some of that may be similar to applied GCSEs,
applied A-levels and some of the BTECs-it will be a very distinctive offer-but
we will always look for equivalence. We
should not shy away from equivalence with GCSEs and A-levels. At the advanced level, we were pleased about
the diploma being worth the equivalent of three and a half A-levels.
Q405 <Lynda Waltho:> Just one more point. The supplementary memorandum from the
Department states that the Government will consider the future of A-levels and
GCSEs "in the light of the evidence".
It is clear that both of those qualifications are here to stay. Has the Department's position on the
long-term future of GCSEs and A-levels changed?
<Jim
Knight:> No.
The announcement that we made in October, when we announced the three
additional diplomas, was that we would postpone the A-level review until 2013.
In the meantime, we will see what the evidence is in terms of the experience of
learners, parents, schools, colleges, universities and employers around the
qualification landscape. We will continue
to invest in and reform A-levels and GCSEs in the meantime. We will not let them wither on the vine-far
from it. Obviously, we will be putting
a lot of energy into making diplomas a success, but not at the expense of
GCSEs, A-levels and, indeed, apprenticeships, which we will be expanding as
well. We want to be able to assess them
all to see whether they are strong offers for all stakeholders. We will have a review in 2013, and we have
no preconceptions about how that will turn out.
Q406 <Mr. Chaytor:> Minister, may I ask about the splitting up of
the QCA? Will the positions of chief
executive at the two new agencies be publicly advertised?
<Jim
Knight:> We would have to advertise under the normal
Nolan rules.
<Ralph
Tabberer:> With the new regulator, the position would be
open, but with the existing QCA in its new form, my understanding is that Ken
Boston will be staying on as the chief executive until the end of his term.
Q407 <Mr. Chaytor:> How will you ensure that the development
agency and the regulator are more independent of the Department than the QCA
has been in the past? Earlier, you
described the QCA as the "independent regulator", and then qualified that by
saying, "Well, relatively independent."
What will be put in place that will make it different?
<Jim
Knight:> The regulator will be a non-ministerial
department, like Ofsted, accountable to Parliament rather than through
Ministers. The new chief regulator, the
chair of the organisation, will be a Crown appointment in the same way as Christine
at Ofsted. In that respect, it will
clearly be more independent than the QCA as a non-departmental public body that
is accountable through us and through our budget lines, subject to a remit
letter. The position of the development
body will be very similar to the QCA's current position: it will not be any
closer; it will not be any further away; and it will still perform the
development role that the QCA currently performs.
Q408 <Chairman:> Would you like the Committee to help you with
appointments and screening the eligible candidates?
<Jim Knight:>
Goodness me, it is very difficult for me to refuse any offers of help from the
Committee, but I would have to see the form back in the Department.
<Ralph
Tabberer:> We would have to consult civil service
commissioners, who guide us on this process.
<Jim
Knight:> There you go; that is the official advice.
Q409 <Mr. Chaytor:> So the new development agency will
essentially be the QCA reinvented? It
will not be a non-departmental public body?
<Jim
Knight:> It will be-
<Ralph
Tabberer:> It will remain-
<Jim
Knight:> Yes.
<Ralph
Tabberer:> It will remain a non-departmental public body
accountable to our Ministers.
Q410 <Mr. Chaytor:> Not accountable to Parliament?
<Ralph
Tabberer:> Well, it will be, but through our Ministers,
whereas the independent regulator will not have a route through our
Ministers. That is the distinction.
Q411 <Mr. Chaytor:> What will the route be for the independent
regulator?
<Jim
Knight:> As Ofsted is now, so not through Ministers
but directly to Parliament.
<Chairman:> Through this
Committee.
Q412 <Mr. Chaytor:> Had the development agency been in existence
12 months ago, would a policy such as the introduction of single level tests
have been taken up only after advice from the development agency? Had that been the case, would it have been
likely to have been introduced over a longer period? The point that I am trying to get at is this. There was an urgency about the introduction
of the single level test. Your letter
to the Committee mentions that one of the problems was the lack of pre-testing. Is this not almost like the introduction of
Curriculum 2000, when the lack of pre-testing was one of the reasons for the
difficulties?
<Ralph Tabberer:> The pilot for the single level test has been
agreed and worked on with the QCA from the beginning. The very big difference between Curriculum 2000 and the single
level test is the fact that it is a pilot.
We are making sure that it acts as a pilot. We test it out so that we understand what is going on. We are doing that with the QCA. There is nothing in the timing of any of
these decisions that will have changed the way that we approach that or the
advice that we would have given.
Q413 <Mr. Chaytor:> May I ask about the primary review? Could you say a word about the time scale
for that, and will it be under the aegis of the new Development and Training
Agency?
<Jim Knight:> Jim Rose is leading that for us. Most of the support that he gets
logistically, in secretariat and personnel terms, is from the QCA. They are doing that together. Jim is working from within the Department,
but with the QCA, in that review. We
expect some interim advice from him in a few months, and we are looking for him
to conclude by the end of the year.
<Chairman:> Fiona has a
very quick question on creativity. She
has a pressing problem.
Q414 <Fiona Mactaggart:> I am sorry about that; I said to the
Chairman that I would have to leave and that I would not be able to ask you
about your welcome announcement on the cultural offer and on ways to assess
creativity. Clearly, no one really
knows how that needs to be done, and I wonder whether you would share with the
Committee your early thoughts on how it is to be done-and continue to share
with the Committee what you are going to do?
<Jim Knight:> To continue to share-
<Chairman:> You had better
make it quick, Minister, or Fiona will miss her deadline.
<Jim Knight:> It is obviously something that we would be
happy to do. We announced last week
that we would pilot how this works. We
know that there are a number of schools where this already happens. We know that there is a considerable amount
of culture already within the curriculum in terms of art, music, dance and
drama. We know that a number of schools
already do trips to access all of that, so what we need to pilot is how to
extend that and use the very positive experience of greater partnerships,
informed by the Committee's excellent report, and integrate it with the five
hours of sport, some of which, as with culture, would be out of school.
Q415 <Fiona Mactaggart:> What about assessing it?
<Jim Knight:> In terms of assessing how well it is working
and the results, I turn to Ralph.
<Ralph Tabberer:> We would be very happy to work with the
Committee on the approach we take. It
is new ground for us, and it is important that we get it right. However, we are not in a position where we
have a working model up and running, so given your interest we would enjoy
looking at it with you.
<Chairman:> Honour is
satisfied; thank you, Fiona. David, I
am sorry about that.
Q416 <Mr. Chaytor:> I return to the question of the independence
of the new agency. What interests me is
where it is specified, where the separate functions are to be described, and
how the Government are going to guarantee that independence and ensure that, at
the end of the day, Ministers' influence will not determine the recommendations
and the advice of the agency.
<Jim Knight:> We will need to legislate to achieve some of
this. We can set up some interim
arrangements in shadow form, effectively trying to put a Chinese wall through
the QCA. Ultimately, much of it will
therefore be set out in legislation.
<Ralph Tabberer:> I would add that the document "Confidence in
Standards" is out for consultation and it sets out the detailed proposals for
the bodies. That will allow you and us
an opportunity to look at the careful balances that need to be achieved in
order to get the regulator and the agency right. The particular opportunity that it gives us is to make what were
robust internal procedures within the QCA the subject of work between the two,
so one takes responsibility for development and the other holds them to
account. It will now be a much more
transparent process, which should allow the public to have more confidence in
it.
Q417 <Mr. Chaytor:> The new regulator will have responsibility
for regulating the quality and reliability of qualifications and assessments,
but in terms of the policy surrounding assessment, will he be required to give
advice on that or will it come under the aegis of the agency? The specific issue that I put to you is
perhaps the question with which we started our discussions this afternoon, that
of the multiple uses of assessment. Who
will be advising the Government about whether it makes sense to use pupil
assessment as a means of holding schools accountable and as a means of
informing parental preference? Will
that be the role of the regulator or the development agency, or will that stay
within the Department?
<Jim Knight:> In simple terms, Ministers and the Department
will still decide on policy. The new development agency will develop that
policy into practice, as appropriate and according to its remit, and the
regulator will assess whether or not standards have been maintained as a
result.
It
will be quite important to respect the independence of the regulator. If we started to involve the regulator in
the policy development too much, we would be back to where we are with the
current QCA, in terms of the confliction that I talked about, if confliction is
a word, between its development role and its regulatory role. Therefore, we would be cautious about
that. However, we would certainly be
asking the new development agency for advice on policy development and then
deciding what to do with that advice.
Q418 <Mr. Chaytor:> If the new development agency gave Ministers
advice that it no longer made sense for pupil assessment to serve so many
different purposes, would you be likely to accept that advice?
<Jim Knight:> Given that the QCA currently performs that
development role, the QCA could give us that advice and we would then decide
accordingly. We have not received that
advice. I have been fairly robust in my
position, so I think that you can predict how I might respond if I received
such advice.
Q419 <Mr. Chaytor:> Will the legislation be specific in giving
the development agency responsibility for advising the Government on the uses
of assessment and not just the specific forms of assessment?
<Jim Knight:> My familiarity with "Confidence in Standards"
is not sufficient to be able to answer that question, but I can obviously let
you know if Ralph cannot.
<Ralph Tabberer:> In terms of taking advice, the closest
relationship will be between the QCA's development agency and the Department in
putting together advice for the Ministers.
I think that it would be fair to say that we will also seek the views of
the regulator on the moves and their response will be public. It would not make sense just to act and have
the regulator completely out of the process.
However, we have to guard this process in such a way, as Jim indicated,
that we do not cause conflict for the regulator in so doing. Again, I would point to the current
consultation as an opportunity for us to tease out these issues. They are very
good questions.
Q420 <Mr. Chaytor:> If the advice from the regulator conflicts
with the advice from the development agency, who mediates? Will the legislation include some kind of
mechanism for that mediation?
<Ralph Tabberer:> Ultimately, the decision in this area falls
to Ministers. The advice of the
regulator would be public and that advice would be subject to normal
parliamentary scrutiny.
<Chairman:> David had to leave to attend a
Standing Committee; he did not mean any discourtesy. We will have very quick questions from Douglas and Annette, and
then we are finished for the day.
Q421 <Mr. Carswell:> Am I right, Minister, in thinking that
you said that the regulator would be accountable directly to Parliament and not
through Ministers?
<Jim Knight:> Yes.
Q422 <Mr. Carswell:> But at the same time, you are saying
that the head of the regulatory body would be appointed by the Civil Service
Commission.
<Ralph Tabberer:> The Civil Service Commission gives its
recommendations on the process to be followed.
Q423 <Mr. Carswell:> Given that Gordon Brown made the
excellent recommendation in his first week as Prime Minister that he was going
to reform Crown prerogative and allow the legislature to make key appointments
that hitherto have been the preserve of Sir Humphrey, would there not be a case
for saying that the Children, Schools and Families Committee should conduct
confirmation hearings for the person appointed to that regulatory role? Would that not be an excellent idea? It would ensure genuine accountability to
Parliament and it would ensure that Mr. Brown's excellent idea was realised.
<Jim Knight:> That is a decision above my pay grade.
Q424 <Mr. Carswell:> But you would support it in principle?
<Jim Knight:> Even my view is above my pay grade.
Q425 <Mr. Carswell:> You do not have a view? You do not have a view on Gordon Brown's
White Paper about the governance of Britain?
<Jim Knight:> I think that you understand what I am saying.
Q426 <Mr. Carswell:> No, I do not understand. You do not have a view? Does Gordon know?
<Jim Knight:> I do not have a view that I am going to share
with you now.
<Ralph Tabberer:> I was taught in my briefing not to answer
rhetorical questions.
Q427 <Annette Brooke:> As I recollect, when we talked to the
examination bodies, they did not really see the need for a development agency,
because they could take on its work.
They could give you advice for free-they would not need two chief
executives. Have you considered that?
<Jim Knight:> Yes, it has passed through the mind, but not
for long. We are making the change
because of the perceived conflict of interest in the QCA between its regulation
and its development functions.
Replacing that conflict of interest with a different conflict of
interest, by giving the development of qualifications to the same people who
make money out of them, did not seem sensible.
Q428 <Annette Brooke:> I might not applaud a competitive
model, but I thought that you might.
<Jim Knight:> There is a very competitive qualifications
market out there, which responds to the qualifications that are designed in
structural and policy form by the QCA.
We do not have a problem with competition, but I do not want the
conflict that your interesting notion would produce.
Q429 <Chairman:> One very last thing, Minister. When did you last consult teachers directly
on how they feel about the quality of the testing and assessment system? I do not mean through the unions; I mean
through either the General Teaching Council or, more importantly, by the direct
polling of teachers about their experiences and happiness, and what suggestions
they would make to improve the system.
When was the last time you did that?
<Jim Knight:> You said "not through unions", but I
frequently talk to teachers' representatives about the matter. I do not know when we last carried out any
kind of proper research or survey of the work force on the issue, but we would
have to carry out parallel research with parents, because we have a system that
is designed for the consumers of the product as well as the producers.
Q430 <Chairman:> That ducks the question. When was the last time you consulted? If you have not done it recently, why do you
not do so now?
<Jim Knight:> I tried not to duck it by saying that I did
not know.
Q431 <Chairman:> Is it a good idea? Could you do even a sample survey?
<Jim Knight:> It might be a good idea to do both. Obviously, we carry out various types of
research, but we have a budget on which there are many demands. As always, I will listen carefully.
Q432 <Chairman:> This is a key part of Government policy. Surely the people who deliver the policy up
and down the land should be consulted on how they feel about the regulatory and
testing and assessment frameworks.
Surely it is key to know what they think.
<Jim Knight:> It is key, which is why I have frequent
discussions with teachers' representatives about it.
Q433 <Chairman:> We all know about vested interests. We set up the General Teaching Council to
cut away from that, but you evaded talking about it.
<Jim Knight:> I meet with the GTC and hold discussions with
it-I shall speak at its parliamentary reception fairly soon, and I look forward
to seeing members of the Committee there.
Q434 <Chairman:> Ralph, you were in charge of teacher
training. Do you think that it is
important to keep in touch? As we know,
one of the key elements in delivering quality education is a highly motivated
and highly trained work force, so should it be consulted on the very important
issue of testing and assessment?
<Ralph Tabberer:> I am sure that it should be consulted-your
argument is very clear. We are not
short of teachers' views on the matter, whether they are presented directly by
representatives, at school visits, or through the GTC. We are shorter on responses from parents,
and we might take a look at the range of their views. It is enormously important to look at the public information on
what is happening in the school system.
I still believe-you have heard me defend it today-that our system is
particularly transparent because of what we can show parents regarding what
works and what does not work in our schools and in the system at large. We should not give that up too quickly.
Q435 <Chairman:> It is not that anyone on the Committee
particularly disagrees with that, Ralph, but I am calling on someone to find those
things out scientifically. I am not
simply talking about chatting to the National Union of Teachers or the
NASUWT. Their representatives failed to
appear before the Committee to talk about this important subject, which is an
indication of how important they consider testing and assessment. You should find out scientifically, by going
directly to teachers and evaluating their opinion. We would be happy if you spoke to parents at the same time, but
you should get a clear resonance of what is going on out there.
<Jim Knight:> In the same way that I heard the message that
we might want to think about research on CVA and what information parents find
useful, I hear your message that we could usefully do some proper, quantitative
research with the teaching work force on testing and the other things that we
have talked about today, and I shall certainly take that back to the
Department.
<Chairman:> Thank you very
much. This has been a long
session. We have learned a lot, and we
have enjoyed it-I hope you did, too.