Examination of Witnesses (Questions 380-399)
MONDAY 18 FEBRUARY 2008
JIM KNIGHT MP AND RALPH TABBERER
Q380 Lynda Waltho: I had not seen what you had sent to us, but I have now had a chance to read it. It answers some of the questions that I was forming.
Interestingly, you refer to some unexpected
patterns in the results. Indeed, Sue Hackman said the same in
her letter to schools in January. You go on to say what might
have caused those unusual patterns, but you do not say what the
patterns are. You touched on the subject briefly with Fiona, and I wonder whether you would expand a little on what those patterns might have been.
Jim Knight: As I said at the outset,
we will publish a proper evaluation of the December and June tests in the autumn, when they can be analysed more fully than in the early stages of a pilot. We should bear in mind that it took four years
for the SATs to be piloted. All of these new tests take some time,
and you will inevitably have some teething troubles. We do not publish as each set of test results comes out. We do not publish them in a drip-drip fashion; we tend to do an annual publication
of test results. I do not think that we should do anything particularly
different for this, because it might skew things and put undue
pressure on those schools that are in the pilot scheme. As I said
in the letter, the most significant unusual outcome was the variation between Key Stage 2 and Key Stage 3 pupils taking the same test.
So, let us say that they were taking a Level 4 writing test. The
Key Stage 2 students were doing significantly better when they
were taking exactly the same test as the Key Stage 3 students.
Now, that was a bit odd. We will have to wait and see why that
was the case. It might just be that the sorts of scenarios that
they would have been writing about in that test were more engaging
for younger children than for older children; I do not know. Maybe there are issues of motivation around taking the test in Key Stage 3 that are different from those in Key Stage 2. There were some other
patterns around the higher levels and the lower levels, and the
expectations were different between the two. However, when we
had a first look at the overall patterns that were emerging, we
just thought that there were enough oddities that, although they
are not out of keeping at all with early pilots, we should ask
the National Assessment Agency to run some checks and make sure
that the marking was right before we gave the results to the pupils
and the schools, which we have now done.
Q381 Lynda Waltho: There is a perception
that part of the problem might have been a higher rate of failure,
if you like.
Jim Knight: No, it certainly was
not about the results. It was the patterns of results and the
differences between different types of students, particularly
that difference between Key Stage 3 and Key Stage 2 students,
that we were concerned about. The decisions that we made in November were about how we pitched this test so that the results would be more comparable with the current SATs, in terms of the level gradings.
We made those decisions before these tests were set or before
we had the results. Those sections of the media that think that we have changed the rules because of results are misunderstanding at two levels: first, they are mistaken if they think that we are unhappy with the overall level of results in these pilots; and secondly, they are mistaken about the sequence, because they are interpreting these changes as a response to the results.
Q382 Lynda Waltho: If we could drill
down on this issue, basically what I want you to confirm is whether
the passing rate was lower than you expected it to be. I think
that that is cutting to the chase.
Jim Knight: Again, it is more
complicated than that. In some tests, the results were better
than expected and in some tests the results were worse than expected.
So, it was not about the pass rate; it was about the pattern.
Ralph Tabberer: There were good
results and there were some weak results, but the anomalies were
sufficient to make us appreciate that some things had changed within the tests. Because these were newly set-up tests, there was a test effect, but we do not know what that effect is
yet and we will not know until we run another pilot. There are
some things related to teacher behaviours changing, including
which children they are putting in for the tests and at what stage.
We do not know how much is down to that factor. There are also
some questions about the level that we are pitching the tests
at. However, it is impossible from this first pilot to separate
out which effect is pushing in which direction. You must also
remember that these are self-selecting schools, so there is no sense in which they are a nationally representative sample of performance across the country. So, we are having to find our
way through this process quite carefully. We need another round
of results before we know what is going on. We will have a chance
to try another set of tests and then we will be in a position
to make the review of this process available at the end of the
year.
Q383 Lynda Waltho: If the problem
is a higher rate of failure, it might imply that there is perhaps
a discrepancy between the tool of assessment and the teacher judgment
about whether a child has reached a particular stage. If that
is the case, how could we resolve it?
Ralph Tabberer: First, the only
thing we can do is speculate; we are not in a position to
know and we cannot answer whether it is down to a change in teacher
behaviour. If a child does less well in a test, say, you may deduce that the teacher has got the level wrong. However,
we do not know whether teachers behave differently or take a different
approach to progression tests than they would to an external test
at the end of year 6. They might, for example, be pitching for
a child to show a level earlier than they normally would in order
to take part. However, we will not know enough about how people
behave until we review the situation.
Jim Knight: Equally, a Key Stage
2 maths teacher, for example, might be very familiar with Levels
3, 4 and 5, because they deal with those all the time. However,
they might be less familiar with Levels 1 or 7, say. Making the assessment in those very early days (in December, they were only two or three months into the pilot) and making the right judgment on whether pupils were ready to take some of the tests might have been difficult.
Q384 Lynda Waltho: The Government
have put a lot into single level testing and have stated that
it will be rolled out nationally subject to positive evidence
from the pilot study. That shows quite a lot of confidence. Why
are you backing single level tests so publicly before we have
sufficient evidence from the pilots? I know that you are a confident
man.
Jim Knight: Theoretically, the combination of the progression pilot, single level testing and testing when ready, accompanied by one-to-one tuition, is compelling. It would be
a positive evolution from SATs, for reasons that we have discussed.
Naturally, we want such a positive evolution to work. If it does
not work, we will not do it.
Lynda Waltho: That was a very confident
answer.
Chairman: Douglas.
Q385 Mr. Carswell: I have four questions.
The first is general and philosophical. There are lots of examples
in society of testing and qualifications being maintained without
the oversight of a state agency, such as university degrees, certain
medical and legal qualifications, and musical grades. I cannot
remember hearing a row about dumbing down Grade 2 piano or an
argument about whether the Royal College of Surgeons had lowered
a threshold. Therefore, why do we need to have a state agency
to oversee testing and assessment in schools? Does the fact that
the International Baccalaureate has become more popular in certain
independent schools suggest that some sort of independent body,
which is totally separate from the state and which earns its living
by setting rigorous criteria, is necessary?
Jim Knight: Yes, it is necessary,
which is why we are setting up an independent regulator that will
be completely independent of Government and directly accountable
to Parliament.
Q386 Mr. Carswell: But it will not
earn its living by producing exams that people want to take; it
will be funded by the taxpayer.
Jim Knight: Yes, but the examinations
are absolutely crucial to the future of the country and to the
future of children in this country (marginally more so, I would argue, than Grade 2 piano), so it is right to have an
independent regulator to oversee them in the public interest.
However, we should move on from the QCA as it is currently formed,
which is to some extent conflicted, because it both develops and regulates qualifications. Because it undertakes the development
of qualifications, it has a vested interest in their success,
which is why we thought that it would be sensible to split them.
We will legislate in the autumn, but we will set up things in
shadow form later this year under current legislation. That means
we will have that independence.
Q387 Mr. Carswell: If the QCA earned its fees by setting examinations in competition with other bodies, I have no doubt that it would set good tests.
Jim Knight: Would I not then appear
before the Committee and be asked about the over-marketisation
of the education system? People would say that valuable exams
are not being properly regulated or set because there is not enough
of a market to make that particular speciality commercially viable.
Q388 Mr. Carswell: Architects and
surgeons seem to get on okay.
Jim Knight: Yes, but while there will always be a good market for architects and surgeons, there may not be for some other important skills.
Q389 Mr. Carswell: Without wanting
to move away from asking the questions, I wonder whether you would
deny the claims of those people who suggest that over the past
15 to 20 years, under Governments of both parties, standards have
dropped. I will give you some specific instances. In 1989, one
needed 48% to get a C grade in GCSE maths. Some 11 years later,
one needed only 18%. That is a fact. Successive Governments and
Ministers have claimed that exam results get better every year.
However, in the real world, employers and universities offer far
more remedial courses to bring school leavers up to standard than
they did previously. International benchmarks show that UK pupils
have fallen behind. Does that suggest that, paradoxically, we
have created an education system that is drowning in central targets
and assessments, but one that lacks rigour? Central control is
having the opposite effect to the one intended.
Jim Knight: You will be amazed
to hear that I completely disagree with you.
Q390 Mr. Carswell: Which fact do
you dispute?
Jim Knight: Ofsted, an independent
inspectorate, inspects the education system and gives us positive
feedback on standards. We also have the QCA, which is an independent
regulator. Although we are strengthening the independence of the
regulation side, the QCA still remains relatively independent.
It regulates standards and ensures that the equivalence is there.
It says categorically that standards in our exams are as good
as they have ever been. Then we have National Statistics, which
is also independent of the Government. We commissioned a report
led by Barry McGaw from the OECD, which is a perfectly respectable
international benchmarking organisation, and he gave A-levels
a completely clean bill of health. What has changed is that we
are moving to a less elitist system. We are trying to drive more and more people up through the system to participate post-16
and then to participate in higher education. Some people rue the
loss of elitism in the system and constantly charge it with dumbing
down, and I think that that is a shame.
Chairman: You did not answer Douglas's
point about the particular GCSE in percentage terms.
Mr. Carswell: In 1989, one needed 48%
to get grade C GCSE maths. Some 11 years later, one needed 18%.
Do you agree or not?
Ralph Tabberer: I do not agree.
The problem with the statistics is that you are comparing two
tests of very different sorts.
Mr. Carswell: Indeed.
Ralph Tabberer: The tests have
a different curriculum, a different layout and different groups
taking them. We cannot take one percentage and compare it with
another and say that they are the same thing. That is why we need to bring in some measure of professional judgment to look at tests that use different questions at different times. That is
why in 1996 we asked the QCA to look at tests over time, and it
decided that there were no concerns about the consistency of standards.
In 1999, we asked the Rose review to look at the same thing, and
it said that the system was very good regarding consistency of
standards. In 2003, as you rightly pointed out, we went to international
experts to look at the matter. We put those questions to professional
judgment, because it is so difficult to look at tests.
Q391 Mr. Carswell: Quangos and technocrats
are doing the assessment. The Minister has mentioned three quangos,
so technocrats are assessing performance.
Jim Knight: Look at the key stage
results. Look at Key Stage 2 English, where the results have gone
up from 63% to 80% since 1997. In maths, they have gone up from
62% to 77%. In English at Key Stage 3, they have gone up from
57% in 1997 to 85% in 2007. There is consistent evidence of improvement
in standards. It should not be a surprise, when you are doubling
the amount of money going into the system and increasing by 150,000
the number of adults working in classrooms, that things should
steadily improve. The notion that the improvements are because
things are dumbed down is utter nonsense. The international comparators
are obviously interesting and very important to us. We started
from a woeful state in the mid-90s, and we are now in a much better
state, but we are still not world class. We know that we have
to do better to become world class, and we said so explicitly
in the Children's Plan. We also know that if we do not carry on
improving, we will be left behind, because the international comparators
also show that more countries are entering the league tables and
more are taking education seriously and doing well. Globally,
the competition is out there, and we must respond.
Q392 Mr. Carswell: We may not agree
on that, but there is one area where I think that we agree, because
I agree with what you said earlier about education needing to
respond to changing social and economic circumstances. If the
centre sets the testing and assessment, it is surely claiming
that it knows what is best, or what will be best, and what needs
to be judged. If you have central testing, will you not stifle
the scope for the education system to be dynamic and to innovate?
It is a form of central planning.
Jim Knight: We do not specify things for the end of Key Stage 4 examinations; we talk about equivalency
in terms of higher-level GCSEs, so if people want to take other,
equivalent examinations, that is fine. The only things where we
specify are SATs, which, as we have discussed, are intended to
provide a benchmark so that we can measure pupil performance,
school performance and national system performance.
Q393 Mr. Carswell: I am anxious about
the way in which the SATs scoring system works. I was reading a note about the CVA system, which we touched on earlier.
If testing is about giving parents a yardstick that they can use
to gauge the sort of education that their child is getting, that
is a form of accountability, so it needs to be pretty straightforward.
Is there not a case for saying that the SATs scoring system and
the CVA assessment overcomplicate things by relativising the score,
for want of a better word? They adjust the score by taking into
account people's circumstances, and I have read a note stating
that the QCA takes into account particular characteristics of
a pupil. Is that not rather shocking, because it could create
an apartheid system in terms of expectations, depending on your
background? Should it not be the same for everyone?
Jim Knight: There is not an individual
CVA for each pupil, and I do not know what my child's CVA is; the
CVA is aggregated across the school. The measure was brought in
because there was concern that the initial value added measure
was not sufficiently contextualised, that some were ritually disadvantaged
by it and that we needed to bring in a measure to deal with that.
On questions from Annette and others, we have discussed whether it is sufficiently transparent to be intelligible. I think
that the SATs are pretty straightforward. They involve Levels
1, 2, 3, 4, 5, 6 and 7: where are you at? That is straightforward
enough. Obviously, you then have 1a, b and c and the intricacies
within that.
Q394 Mr. Carswell: There is all the
contextualising and relativising that means that you cannot necessarily
compare like with like.
Ralph Tabberer: There is no difference
in the tests that different children sit or in the impact of levels.
You could perhaps suggest that the CVA is providing an analysis
that includes a dimension of context, but that is different. The
tests are constant. With the analyses, we are ensuring that (over the years, through consultations, people have encouraged us to do this) a variety of data is available, so people can see
how a child or school is doing from different angles.
Q395 Mr. Carswell: A final question:
living in this emerging post-bureaucratic, Internet age, is it
only a matter of time before a progressive school teams up with
a university or employer and decides to do its own thing, perhaps
based on what is suitable to its area and the local jobs market,
by setting up its own testing curriculum? Should we not be looking
to encourage and facilitate that, rather than having this 1950s
attitude of "We know best"?
Jim Knight: We are encouraging
employers to do their own thing and to become accredited as awarding
bodies. We have seen the beginnings of that with the Maccalaureate,
the Flybe-level and Network Rail. We have a system of accreditation
for qualifications; you might think it bureaucratic, Douglas.
We are rationalising it to some extent, but a system of accreditation
will remain. After accreditation, there will be a decision for
maintained schools on whether we would fund the qualification,
and I do not see that as going away. We will publish a qualifications
strategy later this year setting out our thinking for the next
five years or so. The independent sector might go down that road.
All sorts of qualifications pop up every now and then in that
area. Any other wisdom, Ralph?
Ralph Tabberer: I go back to the
starting point. Before 1988, we had a system whereby schools could
choose their own assessments. We introduced national assessment
partly because we did not feel that that system gave us consistent
quality of education across the system. I do not know of any teacher
or head teacher who would argue against the proposition that education
in our schools has got a lot better and more consistent since
we introduced national assessment. We have put in place the safeguards
on those assessments that you would expect the public to look
to in order to ensure that they are valid, reliable and consistent
over time.
Jim Knight: Obviously, with the
Diplomas it is a brave new world that has started with employers
and with asking the sector skills councils to begin the process
of designing the new qualifications. That has taken place at a
national level; it is not a local, bottom-up thing, but a
national bottom-up thing.
Chairman: We are in danger of squeezing
out the last few questions. I realise that this is a long sitting,
but I would like to cover the rest of the territory. Annette.
Q396 Annette Brooke: We heard a view
from a university vice-chancellor that it is possible that pupils
from independent schools will account for the majority of the
new A* grades at A-level. What is your view on that?
Jim Knight: I tried to dig out
some statistics on the numbers getting three A* grades, which
has been mentioned in the discussion; I am told, based on
2006 figures, that it is just 1.2% of those taking A-levels. To
some extent that is on the margins, but we have done some research
into whether, judged on current performance, those getting A*
grades would be from independent or maintained schools, because
we were concerned about that. We believe in the importance of
adding stretch for those at the very top end of the ability range
at A-level, which is why we brought in the A* grade. However,
we were conscious of worries that it would be to the advantage
of independent-sector pupils over maintained-sector pupils. The
evidence that has come back has shown the situation to be pretty
balanced.
Q397 Chairman: Balanced in what sense?
Ralph Tabberer: We have looked
at the data, following the vice-chancellor's comment to the Committee
that around 70% of those getting three A*s would come from independent
schools. From our modelling, we anticipate that something like
1,180 independent school pupils would get three or more A*s from
a total of 3,053, so 70% is far from the figure that we are talking
about.
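The proportion implied by the figures quoted can be checked directly (both numbers are as given in the testimony; the percentage is computed here for clarity):

$$\frac{1{,}180}{3{,}053} \approx 0.386 \approx 39\%$$

so roughly two fifths, not 70%, of the pupils projected to get three or more A*s would come from independent schools.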
Annette Brooke: We have to wait and see.
Jim Knight: Yes.
Annette Brooke: That sounded a very reasonable
hypothesis.
Q398 Chairman: The examination boards
also said that A*s will mean that fewer kids from less privileged backgrounds get into the research-rich universities. That is what
they said. Although you are not responsible for higher education,
Minister, every time you put up another barrier, bright kids from
poorer backgrounds are put off from applying. You know that.
Jim Knight: Yes. We had some concerns
about that, which is why we examined actual achievement in A-level
exams, and we were pleased to see that particular result. The
difficulty when we took the decision, just to help the Committee,
was that we could see the logic in providing more stretch at the
top end for A-level. The issue was not about universities being
able to differentiate between bright A-level pupils, because we
can give individual marks on modules to admissions tutors; it
was genuinely about stretch. Should we prevent pupils, in whatever
setting, from having that stretch, just because of that one worry
about independent and maintained-sector pupils? We took a judgment
that we had a bigger responsibility than that, which is to stretch
people in whatever setting they are in. That is why we took that
decision, but we were reassured by the evidence that the situation
is not as bleak as Steve Smith, whom we respect hugely as a member
of the National Council for Educational Excellence, might have
at first thought.
Q399 Annette Brooke: I disagree with
you on that point. Many years ago, for my generation, we had S-levels,
so you had the opportunity for stretch. Why do we need this tied in so closely? Maybe sometimes we should put the clock back.
Jim Knight: We had S-levels; I was very pleased with my Grade 1 in geography S-level. We replaced those subsequently with the Advanced Extension Award, in which I was delighted by my daughter's result. However, not many people took them, and they were not widely accepted; they did not seem to be a great success. S-levels were extremely elitist; it was fine for me, in my independent school, to get my Grade 1.
I am sure that you did very well in whatever setting you were
in.
Annette Brooke: I did not go to an independent
school.
Jim Knight: They were introduced
in the era of an elite education system. Integrating something
into the A-level is a better way forward than introducing something
marginal.