Memorandum submitted by the National Association of Head Teachers (NAHT)

 

 

GENERAL ISSUES

 

Why do we have a centrally run system of testing and assessment?

 

It is essential, first of all, to make the key distinction between assessment and testing:

 

Assessment lies at the heart of all teaching and learning and is the fundamental professional activity of any teacher. It enables teachers to establish the performance and understanding of their students and to assist with ongoing learning and development.

 

Testing covers the final, standardised awarding of an agreed qualification or level at a particular point. This applies to the SATs as well as to such qualifications as GCSEs, A levels etc.

 

It is where these two activities are not distinguished from each other that confusion and difficulties arise.

 

It must be recognised that the British centrally-run system of testing and qualifications at the end of compulsory education and beyond is respected internationally. Although there are ongoing difficulties in the way in which these qualifications evolve over time, there is no-one calling for the wholesale abolition of this highly valued system. However, the rationale for the current centrally-run test system stems from the Government's standards agenda, with its associated regime of targets, tests and league tables.

 

The current arrangements by which children are tested through national tests are viewed as burdensome and damaging. A review of this system, and of the narrow rationale underpinning it, is of paramount importance.

 

 

What other systems are in place both internationally and across the UK?

 

Every school has its own arrangements for internal assessment, many highly praised during Ofsted inspections and many reflecting the skills of the teaching workforce. As part of the National Strategies, a focus on "Assessment for Learning" has proved to be of great value in enabling teachers to track and support students through their learning journey.

 

It is where these activities become directed solely towards "passing the SATs" that they become weakened and potentially damaging.

 

In many of the countries that have been rated highly in such international projects as PISA, formal education begins later than in the UK and there is no comparable systemised arrangement of formal tests. More recent information from countries such as Holland, Finland and Denmark suggests that there is a greater emphasis upon play and creativity at younger ages, formal schooling begins later, teachers have greater autonomy and the system of national testing and assessment is far less draconian, where it exists at all. Certainly there is no high stakes testing or publication of league tables, and there is an acceptance that children develop in different ways and at different rates.

 

It is also worth noting that, in Wales, a decision was taken in 2005 to make key stage 2 tests optional and abolish league tables. Instead, the system is predicated on assessment of an individual's attainment and progress, rather than on accountability within the system, as in England.

 

 

Does a focus on national testing and assessment reduce the scope for creativity in the curriculum?

 

At its best, creativity releases the child from the rigid, formal framework of the national curriculum, to be able to explore and investigate, in a holistic and practical mode, the wonders of the world around him or her. This approach, however, has to be extremely well structured and organised by the teacher and the school, as a framework of essential skills and knowledge needs to underpin the curriculum so that the child is able to develop his or her creativity. The professional activity of ongoing assessment and understanding of a child's development will never reduce the scope for creativity. Rather, the encouragement of a skilled adult will nurture the creative development of children through the early years.

 

If the time and energies of teachers, parents and children are dominated by a narrow syllabus and a narrow range of activities which will be the subject of high stakes testing, we run the risk of these dominating the curriculum, and this may well lead to a narrowing of opportunity. If children are straitjacketed by "teaching to the tests", whether at KS1, KS2 or KS3, there will not be time for the normal, essential creative development which needs to be a part of the whole educational experience.

 

 

Who is the QCA accountable to and is this accountability effective?

 

The brief of QCA is "to regulate, develop and modernise the curriculum, assessments, examinations and qualifications." It is described as "a non-departmental public body, sponsored by the Department for Education and Skills (DfES). It is governed by a board, whose members are appointed by the Secretary of State for Education, and managed on a day to day basis by an Executive Team."

 

In its regulatory capacity, its role is to ensure that the Awarding Bodies adhere to the clear rules relating to their examinations and, from time to time, to conduct appropriate reviews of this work. It is for QCA to take on this role, to ensure that the trust which has been built up over time can continue. In this capacity, QCA is highly effective.

 

In terms of its role as developer and moderniser of the curriculum, QCA is extremely careful to involve all key stakeholders in its reviews and to use the expertise of the teaching profession, through a wide range of organisations. The integrity and skill of QCA officials is generally appreciated and respected by the education professionals.

 

The QCA is given clear remits relating to aspects of its work by the DfES and, where frustrations are expressed, it is largely because the remit does not give QCA sufficient freedom in aspects of its work. QCA offers sound professional advice to the DfES, but the Secretary of State for Education is not bound to follow this advice. However, there have been circumstances where QCA has offered strong recommendations for caution (e.g. over the abolition of coursework in GCSE) and the DfES has asked QCA to undertake further work.

 

QCA is generally effective but there are potential dangers in that it is so strictly controlled by the DfES that all it is empowered to do is offer advice.

 

 

What roles should exam boards have in assessment and testing?

 

The Awarding Bodies are highly respected for their work in ensuring that standards are maintained in external qualifications over time. In spite of recurrent negative publicity each August, there is evidence that employers, teachers, parents and pupils have great confidence in the qualifications that are offered and awarded. Even the GCE "A" levels have returned to their former status following the debacle of Curriculum 2000.

 

The ongoing development of e-testing, the development of the new diplomas, the support for teachers and students in working through the new GCSEs and other qualifications are aspects for which the Awarding Bodies are given due credit.

 

Questions about the role of coursework, the viability of the new Diplomas, the risks inherent in the greater use of the internet for research (including plagiarism) and the issues relating to the increased costs of examination entries for schools and colleges all need to be viewed in the context of general recognition that the Awarding Bodies are successful providers of a tried and tested system.

 


NATIONAL KEY STAGE TESTS

 

Current situation

 

How effective are the current key stage tests?

 

The current key stage tests dominate the work of primary schools and, in secondary schools, the work of key stage 3. This is not healthy. As with any summative assessment system, the key stage tests give only a snapshot of a pupil's ability at a specific time and in a relatively narrow field.

 

The programmes of study and the range of the curriculum are not, in themselves, damaging, but the emphasis on the outcome of the tests means that the focus of much of the teaching, in particular in years 6 and 9, is on test performance and likely content. This is clearly insufficient and narrows the range of what is offered.

 

The current key stage tests are effective in testing the prescribed content and schools' effectiveness in preparing children to undertake those tests. They are not effective in testing either pupils' broader range of educational achievement or the success of a school (except its success in preparing pupils for the tests!). There is also a growing body of evidence that the plethora of testing 'windows' is having a detrimental effect on individual children's health and well-being.

 

 

Do they adequately reflect levels of performance of children and schools, and changes in performance over time?

 

The key stage tests provide one source of helpful performance data for both students and teachers. Because the NAA draws on long-established, tried and tested skills which ensure that standards are maintained over time, the tests can be used as one broad indicator, but it is hazardous to draw too many conclusions from the minutiae of the detail. A teacher's professional knowledge of the pupil is vital - statistics are no substitute for professional judgement.

 

As an overall national standard, the tests are statistically valid. However, because of the small size of many individual school cohorts, where a single pupil may count for more than 15% of the overall score, the statistical validity of school-level data is severely limited. The tests test only one aspect of educational performance and need to be recognised as a single item of data, to be taken professionally alongside many other elements. Care needs to be taken over the interpretation of data: over-simplified interpretation can lead to flawed conclusions. Any use of data should be as an indicator, rather than a determinant.
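The point about cohort size is simple arithmetic, and can be sketched as follows (the cohort sizes used here are purely illustrative, not drawn from the memorandum):

```python
# Illustrative sketch: how many percentage points of a school's headline
# score a single pupil represents, for a range of hypothetical cohort sizes.
# In a cohort of six, one pupil's result moves the figure by nearly 17 points.
for cohort in (6, 15, 30, 60):
    per_pupil = 100 / cohort  # weight of one pupil, in percentage points
    print(f"cohort of {cohort:2d}: one pupil = {per_pupil:.1f} percentage points")
```

On these figures, a single absent or unwell child in a small primary school shifts the published percentage more than a whole classroom's improvement would in a large one, which is why school-to-school comparison on this data is so fragile.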


Do they provide Assessment for Learning (enabling teachers to concentrate on areas of a pupil's performance that need improvement)?

 

The key stage tests do have a value in giving teachers an indication of pupil performance and will provide some of the data which is helpful in enabling a teacher to understand the performance of their students. However, they provide only one measure and need to be treated as such.

 

Assessment for Learning is far broader than the key stage tests: information must be gleaned on an ongoing basis, from day-to-day coursework and schoolwork, not from one measure, operated at identifiable points in a child's career, for which they may well have been over-prepared. Assessment in the normal process presupposes the collection of information over a period of time, rather than reliance upon a snapshot of attainment, in order to ascertain where pupils are and to plan where they need to go. Assessment for Learning is a broad principle, far wider than feedback from snapshot national tests, and countless schools have developed sophisticated pupil tracking systems through it.

 

 

Are they effective in holding schools accountable for their performance?

 

The key stage tests represent only one measure of performance. Schools have a wide range of accountability measures, ranging from financial benchmarking through to full Ofsted inspections.

 

The development of the self-evaluation systems which take account of key stage test results, alongside other professional educational data, is far more reliable than the one-dimensional picture which is offered by the SATs. Schools now have the tools and are continuing to develop expertise and experience in self-evaluation and they need to be trusted to get on with the job.

 

 

How effective are performance measures such as value added scores for schools?

 

Value added measures are part of the rich array of professional data available to schools, local authorities, SIPs and Ofsted. To some extent they help to provide a context within which the narrow SAT information can be viewed. All elements of professional educational data have their place, but they are to be used in conjunction with other information, to pose hypotheses and to lead to professional discussion about school improvement, rather than to make rigid judgements or to draw simplistic and potentially inaccurate conclusions. Whilst the principle behind value-added scores is reasonable, there is still disquiet about the validity of the data in different contexts. Although the value-added data is in the public domain, its complexity is such that, at best, it remains meaningless to the majority of its readers. At worst, it is open to misuse and abuse.


Are league tables, based on test results, an accurate reflection of how well schools are performing?

 

League tables are hugely damaging to the educational system. They only use one of the many types of measures which should inform understanding of the context and the success of a school and its pupils. They should never be used to make simplistic comparisons between different schools, in different areas, teaching a different cohort of pupils. They should never be viewed as a total measure of any school.

 

League tables based on test results will only ever indicate how a school has enabled its pupils to perform in those particular tests and this can never give a full indication of how effective the organisation is in offering a wide, broad and appropriate education to those young people in its charge. Even modified by social deprivation or value added factors, they can only give a distorted snapshot of the work of a vibrant and organic community.

 

 

To what extent is there "teaching to the test"?

 

Because of the external focus on the results of SATs, there is far too much "teaching to the tests". Recent survey evidence indicates that, for four months of the school year, year 6 classes spend nearly half their teaching time preparing pupils for the key stage 2 tests.

 

This has been actively encouraged by the DfES through the provision of "booster classes" and through the requirement to produce "intervention plans". These boosters and interventions have not necessarily been used as professional development plans for the wider education of children. Instead, they have had the prime focus of ensuring that a small identifiable cohort of children will be "boosted" to achieve a higher grade on the narrow range of work relating to particular tests.

 

This emphasis has narrowed the focus of the curriculum and introduced professional fear into the work of both headteachers and individual class teachers. A headteacher's or a year 6 teacher's career can be blighted by a single poor performance (for whatever reason, including the unfortunate absence of a couple of bright pupils). As noted above, because of the relatively small cohort behind any one school's results, the statistical validity of any single set of results is limited.

 

Very few teachers have the confidence to take risks and introduce dynamic and entirely appropriate rich activities with students approaching the SATs, if the content appears not to relate directly to that which will be examined.

 

 


How much of a factor is "hot housing" in the fall off in pupil performance from year 6 to year 7?

 

A pupil who has been coached emphatically and successfully to achieve a grade higher than they would naturally have obtained may well, when coping with the pressures of transfer to a new and more adult environment, appear to have "slipped a level".

 

There is also a danger, reported by many professionals, that students may learn how to succeed in a particular type of test, which can give a distorted picture of their broader ability. There are many examples of year 6 students who have obtained high levels, particularly in Science SATs, who are not able to replicate this performance within the secondary curriculum. The results are not wrong. They merely indicate that the students have learned how to pass Science SATs rather than developed scientific skills and absorbed scientific content. This can be extremely unhelpful for the receiving secondary school.

 

Another huge danger is that the "hot housing" may not be a stimulating activity and that this may have a damaging effect on the morale of the student. If booster classes and repetitive test practice activities are boring and continue to offer more of the same to the student, they are unlikely to foster a love of learning such as could be engendered by a rich and creative curriculum.

 

If pupils are not force-fed a diet of SATs, they may well also be able to prepare more broadly for the transition to the very different environment of secondary school.

 

 

Does the importance given to test results mean that teaching generally is narrowly focused?

 

Yes - see above. Recent studies have concluded that the standards agenda has focused teachers' attention on the tested subjects, to the detriment of the rest of the curriculum.

 

 

What role does assessment by teachers have in teaching and learning?

 

Assessment by teachers lies at the heart of all teaching and learning. Assessment may be formal and thorough, or brief and effective and undertaken through oral or other processes. Not all recent developments have been unhelpful in this respect: for instance, the teacher assessment component of national assessment prepared teachers for Assessment for Learning and the emphasis on personalised learning.

 

Every teacher is assessing the performance of his or her pupils at every point in the teaching and learning activity. It may not be formal; it may not be extensive but at every point, information about what a child knows or does not know is used by the skilled teacher in making decisions about the advice to be given, the encouragement to be given and the ongoing educational needs of the pupils. True personalised learning depends on skilled ongoing assessment by the teacher and on skilled self-assessment by the pupil.

 

It is vital that we do not confuse assessment with formal test structures.

 

 

THE FUTURE

 

Should the system of national tests be changed?

 

The tests themselves are not inherently the root of the problem. It is the emphasis placed on the results, and the use made of them, that has done and continues to do the damage. The high-stakes nature of the process is what is leading to the skewing of the curriculum and to stress which is unhelpful and unhealthy for students and their teachers. The majority of our members do not have an issue with the principle of testing. The crucial issue remains the high stakes nature of the process and the emphasis on published league tables, coupled with the linking to inspection outcomes.

 

League tables need to be abolished and it needs to be recognised that SATs only offer one of many elements by which a school and its success should be evaluated. The current system needs to be changed. Whether or not the tests themselves need to be fundamentally revised is a totally different question.

 

 

If so, should the tests be modified or abolished?

 

League tables should be abolished as should the unhealthy emphasis on a single outcome measure.

 

If the current arrangements are significantly modified along the lines indicated above, a review of the content and style of the tests can be undertaken as a professional and non-emotional activity. This consultation needs to be open and transparent, involving all interested parties, and must look at the nature of, and the rationale behind, the continuation of testing.

 

 

The Secretary of State for Education has suggested that there should be a move towards more personalised assessment to measure how a pupil's level of attainment has moved over time. Pilot areas to test proposals have just been announced. Would the introduction of this kind of assessment make it possible to make an overall judgement on a school's performance?

 

The proposals included in the "Making Good Progress" consultation would lead to a different data set for schools to use, one which would have its own value in a professional context. There is no reason to assume, however, that this data set would be any more accurate, or any less damaging if taken in isolation, than the current one. Any overall judgement of a school's performance would be no more infallible and no less misleading than current information.

 

The new proposals are based on an assumption that a young child should follow a particular path of progress at a particular rate. Children learn in different ways and at different rates. The underlying assumption, that there is an optimum and fixed rate of progress over time for all pupils, is flawed. The danger is that one inadequate measure may be exchanged for another. As stated previously, data provides an indication of knowledge and progress; it is not a definitive determinant.

 

However, as professional data, the information drawn about pupil performance from tests taken "when ready" will have significant value to the school and will fit with other elements of data to assist with school improvement, pupil support and true assessment.

 

 

Would it be possible to make meaningful comparisons between different schools?

 

No. If the pupil is put at the centre of learning, rather than maintaining the current system of school accountability, then the data gives assistance to the planning and developing of the learning for the pupil. It does not support the comparison between different schools.

 

 

What effect would testing at different times have on pupils and schools? Would it create pressure on schools to push pupils to take tests earlier?

 

It is not possible to predict with accuracy what the impact of the new style of tests might be. There will be schools where students are encouraged to take tests early. There may be other schools where students are encouraged to take tests at a later point, when they are more likely to have perfected their performances in the named activities. Teachers, parents and students will learn the rules of the new game over time.

 

There may also be logistical difficulties in some primary schools if the testing has to take place over a longer period of time and there could potentially be greater costs and more disruption to the curriculum. Consideration must be given to the issues for pupils with special educational needs. The P levels used are not suitable for any summative approach.

 

 

If Key Stage tests remain, what should they be seeking to measure?

 

Key stage tests should be used to test the skills itemised within the related programmes of study. They should be used within schools as internal professional data to assist in the process of individual pupil progress and overall school improvement. They should not be used to provide league table data.

 

It should be possible to develop a bank of external tests which can be used when a school feels that the pupil is ready. These tests should be linked to relevant programmes of study, should be skills-based and should be used solely for schools' internal data collection and analysis. Cohort sampling could be built into this bank to help inform national trends from time to time.

 

 

If, for example, a level 4 is the average for an 11 year old, what proportion of children is it reasonable to expect to achieve at that or above that level?

 

Children learn at different rates and in different ways. Some 11 year olds will have far exceeded a level 4, whereas others may need longer to arrive at their destination. What is important is that schools encourage and support pupils to make the progress which they, as individuals, need to make. Local approaches to formative assessment and pupil progress measurements are, in most settings, highly effective. Schools are only too aware that children do not always progress in a regular, linear manner.

 

We must not label as failures 11 year olds who learn more slowly or who have skills in different aspects which cannot be described in such concepts as "level 4". What is a level 4 in Happiness, or a level 5 in Social Responsibility? How can we expect a certain, arbitrary percentage to succeed or fail? More importantly, why should we?

 

 

How are the different levels of performance expected at each age decided on? Is there broad agreement that the levels are meaningful and appropriate?

 

The current descriptions and levels relate to one narrow aspect of the educational and curricular experience. If they are agreed to be criterion-referenced measures relating to specific programmes of study, then it is possible to decide which children have achieved the desired level. The mistake that is too often made is to assume that the output data relates to a far broader range of skills. It does not.

 

 

TESTING AND ASSESSMENT AT 16 AND AFTER

 

Is the testing and assessment in "summative" tests (e.g. GCSE, AS, A2) fit for purpose?

 

The current "summative" tests and qualifications at age 16 and after are generally respected and regarded as fit for purpose. There are a number of modifications due to come into force from September 2008 and these have been the subject of professional consultation.

 

While there are some aspects which will continue to need to be modified to keep up with wider developments, generally GCSE, AS and A2 are not in need of major imposed revisions. Answers to other questions will give further information relating to those aspects which need to be kept under review.

 

 

Are the changes to GCSE coursework due to come into effect in 2009 reasonable? What alternative forms of assessment might be used?

 

The concerns and the media furore about coursework, and about the inevitable increase in plagiarism resulting from the accessibility of materials on the internet, have been largely out of proportion to the potential difficulties for the system and take no account of the changed learning patterns and environment that students have today.

 

Where a student has copied large quantities of material from the internet, the teacher is usually able to detect the fraud. Discrepancies in the student's style, poor blending of the plagiarised material with the student's own work, and teacher common sense will largely reduce the impact of this growing trend. It is not new: pupils have always tried to use extraneous material (and where does research end and plagiarism begin?), and English teachers have long been accustomed to challenging the inappropriate use of other materials in student essays.

 

The initial reaction to get rid of coursework was inappropriate and draconian. Thankfully, a more balanced approach has been adopted since and, by treating each of the different subject disciplines at GCSE in different ways, an appropriate solution appears to be on the horizon.

 

Coursework will always be an entirely appropriate and important part of any student's work throughout study. Whether or not the coursework becomes part of the summative test which gives the final grade for the qualification is another matter. It may be that coursework could be a part of the teacher assessed element. Alternative approaches are being considered as part of the consultation on coursework in conjunction with the Awarding Bodies and QCA.

 

 

What are the benefits of exams and coursework? How should they work together? What should the balance be?

 

Students need to be capable of undertaking independent research and study. Coursework, with varying levels of teacher intervention and assistance, is one of the best ways of ensuring that this can be undertaken. This is recognised and, as part of the Diplomas, an Extended Project is viewed as an essential element. This is entirely right.

 

To be so fearful of the dangers of plagiarism and the internet would be to deny both teachers and students a vital part of the educational experience. The balance between coursework, assessed coursework and terminal examination will, quite rightly, vary from subject discipline to subject discipline.

 

 

Will the ways in which the new 14-19 Diplomas are to be assessed impact on other qualifications such as GCSE?

 

The new 14-19 Diplomas offer the opportunity for a radical and imaginative approach to assessment. Whether or not this opportunity will be taken remains to be seen.

 

The Extended Project, modular study, "when ready" testing and e-assessment are all aspects which will have implications for other qualifications.

 

However, it would be a mistake to regard the Diplomas as a completely new departure from conventional assessment. There have been, for many years, innovative and varied forms of assessment in existing GCSEs and A levels, and it is hoped that the knowledge and experience of these can be a solid foundation for summative assessment in the future.

 

It is ironic that, as we remove the GCSE coursework from many of the subjects, we are seeking ways of assessing and evaluating Extended Projects at level 2. One might ask, just what are the key differences between these two types of assessment?

 

 

Is holding formal summative tests at 16, 17 and 18 imposing too great a burden on students? If so, what changes should be made?

 

Until the formal leaving age is accepted as 18, it will be necessary to have some form of summative testing and qualification at age 16. GCSEs, Level 1 and 2 Diplomas and other suitable qualifications (which may include i-GCSEs) will need to remain until it becomes the norm for all students to proceed to education and training post 16. The Tomlinson report offered a widely respected and viable alternative but when this was rejected, the educational world had to return to ensuring that the current system was as effective as possible.

 

It will remain necessary to have a summative examination so that a reliable, standardised award may be given at the end of a level 1, 2 or 3 course.

 

There are some subjects where there have been too many, and too complex, modules, but these are the subject of further consultation. The question of re-takes is also under review. It is this excess of examination which places too great a burden on students and takes them away from study and the course. Generally, the existing system is fit for purpose.

 


To what extent is frequent, modular assessment altering both the scope of teaching and the style of teaching?

 

Frequent modular assessment is not new. In the early days of GCSE Mode 3, modular assessment became an excellent method of ensuring ongoing motivation for students for whom a terminal examination and traditional methods were not attractive.

 

The new Diplomas will contain considerable elements of modularisation and it is anticipated that these individual elements will have the possibility of being counted for different awards at different levels and in different combinations. The Minerva software, currently being developed, is intended to be the basis for the management of this new system.

 

Teachers have welcomed the moves towards modularisation because of the positive benefits in terms of motivation, and because students can achieve credit for key aspects of the course in spite of finding some parts of the final qualification too challenging or inappropriate.

 

If anything will assist the reintegration of some of the NEETs (young people not in education, employment or training) it will be the further, suitable development of modular, component assessment within the new vocational diplomas.

 

 

How does the national assessment system interact with university entrance? What are the implications for a national system of testing and assessment from universities setting individual entrance tests?

Universities have been worried about the rise in the number of students who achieve grade A at A level. They argue that this has made it more difficult to select the truly high achievers. Making the actual points-level detail available to universities should have gone some way towards indicating which of the students are the highest achievers.

 

Whether or not it will be possible to introduce PQA (post-qualification application) will depend on negotiations between Awarding Bodies, schools and universities on the question of timescales. If the universities can move their start dates back, it may be possible to complete the A level assessment before firm offers are made. Moves to bring forward the A level results dates and to curtail the marking period for the Awarding Bodies will also assist with this.

 

It is to be hoped that universities will accept and welcome the new Diplomas. The Secretary of State for Education has urged them to join with the rest of the educational world in giving the new qualifications a fair and successful start. Some universities, however, will inevitably seek to develop their own admissions criteria and we must not arrest the new developments to pander to their views.

 

Far more worrying must be the trend for independent schools to turn to alternatives such as the i-GCSEs and the International Baccalaureate. It will be essential that QCA and educational organisations work together to ensure that we have a consistent, coherent system of examinations and qualifications at the end of key stage 4 and at the end of compulsory schooling.

 

May 2007