Memorandum submitted by Edexcel
INTRODUCTION
This response is in two parts:
Part 1 describes who we are and what we do.
Part 2 provides responses to those specific
questions posed by the Committee for which we have most direct
experience. We have addressed issues regarding testing and assessment
pre- and post-16 together.
PART 1: WHO WE ARE AND WHAT WE DO
1.1 Edexcel is one of the largest awarding
bodies in the UK and a Pearson company. It offers a wide range
of academic and vocational qualifications, testing and assessment
services and associated products and support aimed at helping
teachers to teach and students of all ages to learn and get on
in their lives.
1.2 Qualifications offered by Edexcel include
GCSEs and A levels, Key and Basic Skills, NVQs, professional qualifications
and the BTEC qualification suite. In the UK, Edexcel qualifications
are taken by over 4,200 secondary schools, 450 colleges, 80 Higher
Education (HE) institutions, 800 public and private sector employers
and a number of e-learning providers. Internationally, Edexcel
operates in over 100 countries.
PART 2: EDEXCEL RESPONSES
2.1 Why do we have a centrally run system
of testing and assessment?
2.1.1 Since the establishment of the National
Curriculum, testing has been a key central mechanism for driving
up standards in schools.
2.1.2 Testing was the mechanism for making
schools accountable in the drive to raise standards.
2.1.3 The need for improvement, when compared with standards elsewhere in the world, was pressing.
2.1.4 The issue is that forms of assessment
that raise standards are not the same as those that are right
for accountability.
2.1.5 Comparability became possible as all
pupils were doing the same test at the same time.
2.2 What other systems of assessment are in
place both internationally and across the UK?
2.2.1 Within the UK, it would be useful
to examine the approach to testing in Scotland.
2.2.2 International comparison studies are
available, particularly TIMSS and PISA.
2.2.3 The approach in Norway is to make assessment a major part of professional development. This differs from England and Wales, where national assessment is an event external to the school and hence becomes something done to schools, as opposed to the school being a part of the process.
2.2.4 The USA makes extensive use of testing.
The difference between the UK and the USA is that of validity
and reliability. In the USA the emphasis is on high reliability
whereas our emphasis is on high validity. Whilst this is a generalisation
and there are notable exceptions in both countries, the assertion
is generally true.
2.2.5 There are many commercial assessment
instruments available to schools. The oldest and one of the most
respected are the NFER standardised tests. Durham University has
over the last few years established itself as a major provider
of school tests.
2.3 Does a focus on national testing and assessment
reduce the scope for creativity in the curriculum?
2.3.1 In the main, yes; this is a function
of the nature of the National Curriculum. How restrictive a curriculum
do we want? The more prescriptive the curriculum, the more restrictive
will be the assessment.
2.3.2 There is a cost/benefit issue here;
creativity is the cost of a prescriptive National Curriculum which
has the benefit of being an effective driver for accountability.
2.3.3 Too much weight on the outcomes of
assessment can damage creativity. The emphasis for the school can, understandably, become the test. Creativity, by its nature, is not known to flourish during a timed test.
2.4 Who is the QCA accountable to and is this
accountability effective?
2.4.1 The QCA is accountable to the DfES.
There is an argument that testing and assessment should also be
subject to independent scrutiny.
2.5 What role should exam boards have in testing
and assessment?
2.5.1 The key difference between national
tests and GCSE, GCE, diplomas, BTEC is that the latter are qualifications.
Exam boards have a wealth of assessment expertise and could have
a role in formative assessment. They have a global view of assessment
over a large range of qualifications; they are well placed to
position national testing within the gamut of predictive assessment
and comparability.
2.6 How effective are the current Key Stage
tests?
2.6.1 There is no single answer to this
question.
2.6.2 They are fairly effective as curriculum
drivers.
2.6.3 They are very effective at assessing
part of the curriculum but there are aspects that are not susceptible
to a timed test of short questions.
2.6.4 As a measure of progress for a school
they can mask the many unexplained variables which may be making
a significant contribution to a school's performance.
2.6.5 The introduction of national tests
improved standards. It is less clear whether a plateau has been reached in the contribution that national tests can make to further improvement.
2.6.6 The tests have been effective at raising
standards through accountability. The future for raising standards
may be a combination of standard tests and an assessment for learning
approach.
2.6.7 As valid and reliable tests, they
are the best that can be achieved in their current format.
2.6.8 The tests have been effective in bringing
a common expectation of teacher performance. The question remains: is it the right expectation?
2.7 Do they adequately reflect levels of performance
of children and schools, and changes in performance over time?
2.7.1 They adequately reflect performance
on short written tests; to what extent this reflects levels of
performance for all children could be questioned.
2.7.2 They are good at giving a level across
each curriculum subject. They do not give enough detail of each
particular part of the curriculum.
2.7.3 They are good at reflecting the performance
of schools.
2.7.4 There are too many expectations for
one test. They do not support the subdivision of attainment into
smaller levels. It would be useful for the Committee to revisit
the Task Group on Assessment and Testing report, which gives a different complexion to the use of levels.
2.7.5 Whilst the tests give a useful indication
of changes in performance over time, they are not suited to influencing
major decision making. The curriculum has changed over time, new
elements have been introduced and different approaches rewarded.
To accurately measure such progress, the curriculum would need
to be stable and the same test used each year.
2.8 Do they provide assessment for learning
(enabling teachers to concentrate on areas of a pupil's performance
that need improvement)?
2.8.1 Sometimes they are good indicators
of areas that a teacher should concentrate on. However, they are
not diagnostic and many areas of understanding are not covered
in a particular test. So they are not sufficient as the sole indicator
of pupil performance.
2.9 Does testing help to improve levels of
attainment?
2.9.1 Yes, indirectly through accountability.
2.9.2 Measurement on its own is not sufficient; levels of attainment improve when performance is targeted by teachers.
2.9.3 A consequence of testing is to narrow
progression for schools that over-prepare children for a particular
level on a test.
2.10 Are they effective in holding schools
accountable for their performance?
2.10.1 To some extent, but there are possibilities for schools to use the value-added indicator tactically by ensuring that there is adequate attainment in early tests and maximum attainment in a final year's test. This practice is not widespread but is a reaction to what is perceived to be a strategic response to high-stakes testing.
2.10.2 The tests report performance around
the average; the extremes of performance are not obvious. 70% at Level 4 and above could mean either 70% at Level 4 or 70% at Level 5.
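A minimal illustration of this point, using two invented cohorts with identical headline figures but very different profiles:

```python
# Two invented cohorts share the same "% at Level 4 and above" headline even
# though one cohort sits entirely at Level 4 and the other at Level 5.

def at_or_above_level_4(levels):
    """Percentage of pupils recorded at Level 4 or above."""
    return 100 * sum(1 for level in levels if level >= 4) / len(levels)

cohort_a = [4] * 70 + [3] * 30   # 70 pupils exactly at Level 4, 30 at Level 3
cohort_b = [5] * 70 + [3] * 30   # 70 pupils at Level 5, 30 at Level 3

print(at_or_above_level_4(cohort_a))   # 70.0
print(at_or_above_level_4(cohort_b))   # 70.0 -- same headline, different profile
```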
2.10.3 The tests, by their nature, provide
a crude measure and it is easy for schools to find themselves
in a comfort zone.
2.10.4 They only hold schools accountable
for English, mathematics and science.
2.11 How effective are performance measures
such as value-added scores for schools?
2.11.1 Value added is a useful measure.
2.11.2 There are ceiling effects for some schools; can a school make significant improvement indefinitely? Small intakes make consistent performance dependent on the ability of a particular cohort of children.
2.11.3 The value-added measure is not a transparent process; it is not easy to judge the validity of the variables.
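To illustrate what a value-added measure computes in essence, the sketch below uses a single prior-attainment variable and invented figures. It is a minimal sketch only, not the official model, whose additional variables are the source of the transparency concern noted in 2.11.3.

```python
# Simplified, hypothetical value-added calculation; all figures are invented.

# Invented national expected Key Stage 2 score for each prior-attainment band.
NATIONAL_EXPECTED = {1: 21.0, 2: 24.0, 3: 27.0}

# Hypothetical pupils: (prior-attainment band, actual Key Stage 2 score).
pupils = [(2, 26.0), (2, 23.5), (3, 28.0), (1, 22.0), (2, 25.0)]

def school_value_added(pupils, expected):
    """Mean difference between each pupil's score and the national
    expectation for pupils with the same prior attainment."""
    residuals = [score - expected[band] for band, score in pupils]
    return sum(residuals) / len(residuals)

print(f"Value-added score: {school_value_added(pupils, NATIONAL_EXPECTED):+.2f}")
```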
2.12 Are league tables based on test results
an accurate reflection of how well schools are performing?
2.12.1 They do not present the full picture.
They provide a crude score of particular areas of a school's provision.
That is not to say that they are not a useful measure; however,
it would be misleading to use them as the single predictor of
performance.
2.13 To what extent is there "teaching
to the test"?
2.13.1 Everywhere, but that can be a positive
thing. The test assesses the curriculum and so teaching to the
test is teaching to the curriculum.
2.13.2 The problem is that short answer
tests do not assess all parts of the curriculum. So excessive
teaching to the test narrows the curriculum experience for the
pupils.
2.13.3 Teaching to the test distorts the
curriculum when taken to the extreme.
2.13.4 It is acknowledged that tests can
define a curriculum but they are not the most appropriate driver
for ensuring a comprehensive teaching approach.
2.14 How much of a factor is "hot-housing"
in the fall-off in pupil performance from Year 6 to Year 7?
2.14.1 Many pupils are looked after at the
end of Key Stage 2 with one-to-one attention to address areas
in which they are having difficulties; so hot-housing is a factor.
2.14.2 It is not the only factor and may
not be the most significant. Others to consider are: lack of expectation by Year 7 teachers; lack of use of the information from the Key
Stage 2 school; and, very importantly, the change of social situation
for the pupils. In addition, pupils themselves are beginning to
change.
2.15 Does the importance given to test results
mean that teaching generally is narrowly focused?
2.15.1 On balance, yes for the core subjects;
however, the introduction of end of Key Stage tests widened the
curriculum for many pupils.
2.16 What role does assessment by teachers
have in teaching and learning?
2.16.1 What role does teacher assessment
play or what role should teacher assessment play? There is considerable
variance within classrooms.
2.16.2 Ofsted says that teacher assessment
does not play a significant role at present. This may be because
assessment has been taken out of the hands of teachers; it is
something that is done to the pupils from outside.
2.16.3 For some, teacher assessment is undertaken
by mimicking the national tests; this is not the most productive
way of using the opportunities that teachers have in the classroom.
2.16.4 The link between teacher assessment
and learning needs to be strengthened; this will not happen unless teachers expect to be in control of formative assessment.
2.17 Should the system of national tests be
changed?
2.17.1 On balance: yes.
2.18 If so, should the tests be modified or
abolished?
2.18.1 National testing has too many purposes
attributed to one test experience.
2.18.2 A national picture of standards could
be found by sampling pupils.
2.18.3 For formative assessment, instruments
should be provided that help teachers address the different aspects
of the curriculum. Good formative assessment which influences
learning will raise standards.
2.18.4 Teachers need to be trained in assessment
techniques and in interpreting assessment outcomes. Teachers should be doing assessment, not administering an external test.
2.18.5 Tests which have been standardised
should be an important addition to teacher assessment. The administration
of such tests should be in the hands of the school.
2.18.6 Schools should be accountable. Assessments
should be moderated and schools should be able to demonstrate
progress and that they are raising standards. For standards to
rise within a school, there needs to be attention to assessment
outcomes, appropriate teaching, well developed curriculum guidelines
and social structures, such as behaviour. All these aspects should
be monitored by Ofsted.
2.19 The Secretary of State has suggested
that there should be a move to more personalised assessment to
measure how a pupil's level of attainment has improved over time.
Pilot areas to test proposals have just been announced. Would
the introduction of this kind of assessment make it possible to
make an overall judgment on a school's performance?
2.19.1 It could, depending on the nature of the personalised assessment. Single-level progress tests will
not be sufficient to judge personal progress. Such assessments
could lead to a distortion of the curriculum as schools focus
on a competency approach to pupil performance.
2.19.2 We would support measures that incorporated
a tool kit of assessment opportunities for teachers. These would
include standardised tests and assessments taken when pupils are ready.
2.20 Would it be possible to make meaningful
comparisons between different schools?
2.20.1 Yes, but a meaningful comparison
would be more than performance tables of attainment on single
level tests.
2.21 What effect would testing at different
times have on pupils and schools? Would it create pressure on
schools to push pupils to take tests earlier?
2.21.1 More than likely it would increase
the testing burden.
2.21.2 At the end of a key stage the focus
of the curriculum becomes narrowed as pupils are prepared for
the test. This will be compounded by more frequent test exposure.
2.21.3 This can be ameliorated by a test
design that complements teacher assessment.
2.22 If Key Stage tests remain, what should
they be seeking to measure?
2.22.1 They should measure pupil attainment
at the end of the key stage across as much of the programme of
study as is appropriate for the test structure. As such they will
give a national picture of standards.
2.22.2 Sampling would be sufficient and
could identify trends and patterns.
2.22.3 School accountability should be by
more intensive assessment measures as described above and moderated
by Ofsted.
2.23 If, for example, performance at Level
4 is the average level of attainment for an eleven year old, what
proportion of children is it reasonable to expect to achieve at
or above that level?
2.23.1 It depends on how wide the band for
average is to be. As with all things in this area, policy, not pupil performance, will dictate the answer.
2.23.2 The level descriptions are meaningful,
but their interpretation has been narrowed down to match expectations
in the tests. It would be useful for the Committee to refer back
to the Task Group on Assessment and Testing report which established
the original 10 level scale.
2.24 How are the different levels of performance
expected at each age decided on? Is there broad agreement that
the levels are appropriate and meaningful?
2.24.1 They have become, de facto, the accepted
levels as policy documents repeatedly stated the level of attainment
for the average pupil.
2.24.2 They were originally standardised
using teacher judgement and, once established, have to be maintained
if comparison over time is to be meaningful.
2.24.3 The issue is, if standards rise and
Level 4 remains average performance, then the difficulty of Level
4 needs to increase accordingly.
INTRODUCTION TO RESULTSPLUS
In 2007, Edexcel will roll out a programme to
help schools raise exam attainment and meet the personalised learning
agenda. This may be of interest to the Committee as part of its
inquiry into testing and assessment.
The programme, called ResultsPlus, will provide
personalised information on exam performance to GCSE and A Level
students, and to their teachers and head teachers. This has major
implications as it will empower students and teachers with a new
range of transparent and accessible information.
ResultsPlus represents a leap forward in personalised
learning in the UK. This is made possible because Edexcel's digital
ePen technology, which allows completed exam papers to be marked
by trained markers on screen, is also able to produce a range
of data based on exam performance.
ResultsPlus comprises four IT products:
ResultsPlus Direct;
ResultsPlus Analysis;
ResultsPlus Skills; and
ResultsPlus Progress.
RESULTSPLUS DIRECT
In summer 2007, all Edexcel GCSE and A Level students will be able to receive their results online for
the first time via ResultsPlus Direct.
The results will feature a Gradeometer which
will show students how close they were to the next grade up or
down. This information provides transparency in the exam process
and allows students and their parents to make informed choices
about applying for the exam to be re-marked, or re-sitting.
This system was successfully piloted in 2006,
when Edexcel provided 2,000 GCSE Maths students with their results
online.
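As an illustration of the kind of calculation a Gradeometer-style display might perform, the sketch below uses invented grade boundaries; Edexcel's actual boundaries, data and presentation are not reproduced here.

```python
# Hypothetical boundaries for one paper (grade: minimum mark); figures invented.
BOUNDARIES = {"A*": 90, "A": 80, "B": 70, "C": 60, "D": 50, "E": 40}

def gradeometer(mark):
    """Return the grade achieved and the distance to the next grade up."""
    ordered = sorted(BOUNDARIES.items(), key=lambda kv: kv[1], reverse=True)
    grade = "U"
    for g, minimum in ordered:
        if mark >= minimum:
            grade = g
            break
    higher = [(g, m) for g, m in ordered if m > mark]
    if not higher:
        return grade, "top grade achieved"
    next_grade, next_min = higher[-1]   # nearest boundary above the mark
    return grade, f"{next_min - mark} marks below grade {next_grade}"

print(gradeometer(78))   # ('B', '2 marks below grade A')
```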
ResultsPlus Direct will allow students to go
online from wherever they are in the world on results day and
access their results using a unique PIN.
In the traditional process, schools and colleges
post lists of results on a notice board. With a secure online
system, each student will see only their own results. Market research
shows that 74% of people think that exam results should be available
via the Internet.
RESULTSPLUS ANALYSIS
In summer 2007, Edexcel will offer head teachers
and school management teams a new resource, ResultsPlus Analysis.
It will provide analysis of results and performance
at a cohort and individual student level. It will allow teachers to produce comprehensive reports showing how the syllabus is being delivered and how students are achieving against it. If a group of students
have not performed well in an area of the syllabus, ResultsPlus
Analysis will highlight the problem and teachers will be able
to adjust their teaching accordingly.
Edexcel will provide access to results information
down to individual question level, as well as providing links
to the examination papers, mark schemes and chief examiners' reports.
This enables centres to compare their results against the national average, compare results by type of centre, download results data onto a spreadsheet, sort results by teaching group or gender, and make detailed observations about students' performance.
This builds on Edexcel's Results Analysis service
(RAS), which already allows schools and colleges to access their
results at question level online.
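As an illustration of the kind of question-level comparison described above, the sketch below assumes a hypothetical CSV file of per-question marks and invented national figures; it is not the ResultsPlus Analysis service itself, whose data formats are not reproduced here.

```python
import csv
from collections import defaultdict

# Invented national mean facility (marks gained / marks available) per question.
NATIONAL_MEAN = {"Q1": 0.82, "Q2": 0.61, "Q3": 0.45}

def cohort_vs_national(path):
    """Compare a centre's mean facility on each question with the national mean."""
    totals, counts = defaultdict(float), defaultdict(int)
    with open(path, newline="") as f:
        # Hypothetical columns: question, mark, max_mark (one row per pupil per question).
        for row in csv.DictReader(f):
            q = row["question"]
            totals[q] += float(row["mark"]) / float(row["max_mark"])
            counts[q] += 1
    for q in sorted(totals):
        centre = totals[q] / counts[q]
        national = NATIONAL_MEAN.get(q)
        if national is None:
            continue
        status = "below national average" if centre < national - 0.05 else "in line or above"
        print(f"{q}: centre {centre:.2f} vs national {national:.2f} ({status})")

# Example, assuming a hypothetical exported file:
# cohort_vs_national("centre_question_marks.csv")
```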
RESULTSPLUS SKILLS
In addition to the performance information offered
in ResultsPlus Analysis, ResultsPlus Skills will provide skills
maps, so teachers will be able to see at a glance which topics
and skills are causing their students problems.
By putting performance data into context, the
skills maps will enable teachers to alter teaching programmes
to raise attainment. For students who need to re-sit exams, their
skills map can form the basis of a revision plan.
ResultsPlus Skills will be available when Edexcel's
GCSE Maths and Science results are delivered in August 2007.
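As an illustration of how a skills map might aggregate question-level marks by topic, the sketch below uses an invented question-to-skill mapping and invented marks; the actual ResultsPlus Skills maps are Edexcel's own.

```python
from collections import defaultdict

# Invented mapping of exam questions to the skill or topic each one assesses.
QUESTION_SKILL = {"Q1": "Number", "Q2": "Algebra", "Q3": "Algebra", "Q4": "Geometry"}

# Hypothetical marks for one student: question -> (marks gained, maximum marks).
student_marks = {"Q1": (3, 3), "Q2": (1, 4), "Q3": (2, 5), "Q4": (4, 5)}

def skills_map(marks, mapping):
    """Aggregate a student's marks by skill to highlight weak topics."""
    gained, available = defaultdict(int), defaultdict(int)
    for question, (mark, maximum) in marks.items():
        skill = mapping[question]
        gained[skill] += mark
        available[skill] += maximum
    return {skill: gained[skill] / available[skill] for skill in available}

for skill, facility in sorted(skills_map(student_marks, QUESTION_SKILL).items()):
    status = "needs revision" if facility < 0.5 else "secure"
    print(f"{skill}: {facility:.0%} ({status})")
```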
RESULTSPLUS PROGRESS
ResultsPlus Progress will be introduced in autumn
2007 as online tests that will allow teachers to check the progress
of their students' learning and identify areas of weakness that
may require further teaching or revision.
Test results will be provided with skills maps
for each candidate, tailored to identify their own strengths and
weaknesses. This will help students plan their own revision and
help teachers plan lessons more effectively and concentrate on
weak areas. Using individual performance information to guide
individual progress is at the heart of the personalised learning
agenda.
ResultsPlus Progress will be available for Edexcel's
Key Stage 3 Mathematics, GCSE Mathematics and 360 Science subjects
from the start of the 2007 academic year.
June 2007