2 Every Child a Reader: Reading Recovery
The policy
9. The former Prime Minister Tony Blair famously
told the Labour Party conference in 1996 that "education,
education, education" were his three main priorities for
government. The National Strategies,[4]
developed in 1998 and expanded in 2005-06, advocate three Waves
of Provision for addressing the range of educational needs in
schools:
Wave 1: Quality First Teaching. The majority
of children achieve well through high quality classroom teaching.
When children are being taught to read, Quality First Teaching
provides high quality, systematic phonic work as part of a broad
and rich curriculum that engages children in a range of activities
and experiences to develop their speaking and listening skills
and phonological awareness.
Wave 2: Small group and one to one interventions.
Some children require additional support to achieve well. This
can often be provided through small group, time limited intervention
programmes delivered by a member of the school's classroom based
support team that will advance children's progress and help them
achieve in line with their peers.
Wave 3: Intensive support. This
is for those children who require the personalised approach of
a programme that is tailored to their specific, often severe,
difficulties. It is usually taught as a one to one programme by
a teacher or a member of the support staff who has undertaken
some additional training for teaching children with reading difficulties.[5]
10. When in 2007 the Government formed the Department
for Children, Schools and Families (DCSF), one of its first acts
was to draw up The Children's Plan, which set out a bold
vision for ensuring that "no child or young person is left
to fall behind".[6]
One of the flagship elements of this promise was the Every Child
programmes (Every Child a Reader, Every Child a Writer and
Every Child Counts), which provide targeted support, including
one-to-one tuition, for those children who in their early school
years have difficulties with reading, writing and numeracy.
EVERY CHILD A READER AND READING RECOVERY
11. Every Child a Reader (ECaR) is a programme of
work aimed at helping poor readers catch up with their peers. It
consists of a programme of Reading Recovery (a type of one-to-one
teaching intervention) for the lowest achieving 5% and less intensive
intervention for the next lowest 15% delivered by teaching assistants
and volunteers supported by the school's Reading Recovery teacher.[7]
The programme was launched as a three-year pilot in 2005, although
it commenced national rollout only a year later.[8]
The national rollout is being managed by the National Strategies,
which are professional development programmes for early years,
primary and secondary school teachers, practitioners and managers.[9]
12. Reading Recovery, the bedrock of the ECaR Wave
3 programme, was developed by the late Marie Clay in the 1970s
in New Zealand, where approximately two thirds of schools use
it. It has also been widely adopted in Australia and the USA,
but less so in the UK. Reading Recovery comprises 12-20 weeks
of intensive, one-to-one, daily tuition by specially trained teachers.
It is designed for year 1 and 2 children.[10]
Our expectations of the evidence base
13. The effectiveness of the Government's policy
for ensuring that no children are "left behind" rests
on a number of factors. We looked at two: the Government's decision
to prioritise literacy in education, particularly the decision
to focus on tackling literacy interventions early in school; and
the Government's decision to use Reading Recovery.
THE VALUE OF EARLY INTERVENTION
14. An obvious initial question is whether early
interventions are worthwhile. There are two parts to this. First,
whether early interventions are more effective than later interventions
in terms of how well an individual will read into adulthood. Second,
whether the cost of intervention is, from a societal point of
view, worthwhile. Neither is easy to calculate.
15. The first would, for the best evidence, involve
longitudinal studies of children who variously receive or do not
receive reading interventions at differing times during their
education. However, this would be exceedingly difficult to do
and certainly unethical since it would involve leaving some children
with known reading difficulties to fend for themselves rather
than receive the same level of help as their peers. What researchers
are left with is the far-from-ideal situation of having to compare
those interventions that are made early in a child's education
with those that are made later. (Some interventions, like Reading
Recovery, are aimed specifically at very early schooling, years
1 and 2, some are aimed at later years and some are flexible across
a wide age range.) This situation is unsatisfactory because the
confounding factors that can be controlled in a snap-shot trial
are more problematic over a long period of time. For example,
some children who receive later literacy interventions because
"they did not get it the first time"[11]
may not have received an intervention before, others may have
received earlier interventions that did not work; there will be
differences in the amount of help children get from their parents
and their peers; the underlying reasons for the difficulty may
be hard to control and may have changed over time; and underlying
difficulties can be cognitive or pedagogical or both.
16. Whether or not early interventions are better
than later interventions is only part of the story. The bigger
question is whether ensuring that every single child can read
is really necessary. As a society, we choose to prioritise reading,
but teaching children to read (all children) is expensive
and teaching the very worst readers is very expensive indeed.
Reading Recovery, for example, costs approximately £2,600
per child.[12] Is it
worth it? To answer this question, we need to know the cost of
doing nothing: society will pay either more, about the same, or
less. We did not put preconditions or stringent expectations
on the nature of the research we were looking for in this area.
Calculating societal costs is complicated and inevitably involves
a large dose of estimation and generalisation. We discuss the
quality of this evidence in paragraphs 26-29.
THE QUALITY OF EVIDENCE AND BENCHMARKING
17. Educationalists put literacy interventions to
the test to see whether they work or not, and if they do work,
how effective they are. Here are the basics:
- In order to test whether a
reading intervention has worked, one needs to know the starting
reading standard of a child (pre-test assessment) and the finishing
standard of a child (post-test).
- The gold standard research model is a randomised
controlled trial (RCT). It is randomised so that decisions about
who receives a reading intervention are taken out of the hands
of the person who will be doing the teaching to avoid bias. It
is controlled in the sense that those children who receive an
intervention are compared against a matched group of children
who do not.
- The silver standard is a quasi-experiment in
which randomisation does not take place, but groups of children
in the experimental group are matched against control groups of
children who do not receive an intervention.
18. Of course, this is a simplification. One does
not necessarily need to compare children who receive an intervention
with those who do not. In the case of reading interventions it
is preferable to compare different reading interventions directly
against each other. There are three reasons for this:
a) it is well established that one-to-one support
for a child yields better results than normal classroom teaching;[13]
b) what is more pertinent is how the variety
of reading interventions compare in terms of effectiveness; and
c) there are ethical problems with assigning
children who have a known reading difficulty into an experimental
category where they receive no extra help at all.
19. All of this overlooks a very obvious question:
how does one measure improvements in literacy? Dr Chris Singleton
extols the virtues of standard (or standardised) scores on the
grounds that "they are age-independent and test-independent
and enable a proper comparison between different groups and different
studies".[14] Additionally,
standardised scores are designed to make it possible to carry
out what statisticians call 'parametric' statistics, which is
a way of saying that the interactions between variables can be
quantified and statistical significance calculated. In other words,
when standardised tests are used it is possible to bring together
numerous studies to improve the power of the data and to control
for a variety of complicating variables.
20. The alternative to standardised scores is the 'reading
age', a metric based on how normally progressing readers
develop. A normal reader aged 6 years 2 months will have
a reading age of 6 years 2 months. This measure is less statistically
robust than standardised scores.
21. The size of the difference between literacy skills
at the start of an intervention and at the end of an intervention,
or years later, can be calculated in two ways. The best is to
calculate the 'effect size' between two standardised scores (where
the mean is usually 100 and the standard deviation 15). By measuring
the difference between two scores and dividing by the standard
deviation, researchers can calculate statistically robust measures
of success (or otherwise).
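For illustration only, the effect-size calculation described above can be sketched in a few lines of Python; the function name and the example scores are our own, not taken from the evidence:

```python
def effect_size(pre_score: float, post_score: float, sd: float = 15.0) -> float:
    """Difference between two standardised scores, divided by the
    standard deviation (usually 15, where the mean is 100)."""
    return (post_score - pre_score) / sd

# A child whose standardised score rises from 85 to 94:
d = effect_size(85, 94)  # 9 / 15 = 0.6
```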
22. An alternative, less rigorous approach is to
calculate the 'ratio gain'. In this measure, the gain in reading
(or spelling) age of a child over a set time frame is expressed
as a ratio of that time frame. For example, a normal child aged 5 years 2
months will, three months later, have a reading age of 5 years
5 months. This is a ratio gain of 1.0. A child who progresses
only three months in a year will have a smaller ratio gain, 0.25.
What we are looking for is children who have fallen behind their
peers but who receive extra help and improve quickly. Professor
Greg Brooks, Research Director of the Sheffield arm of the National
Research and Development Centre at the University of Sheffield,
has suggested that ratio gains of 1.4 or higher are of "educational
significance".[15]
Ratio gains are less satisfactory than effect sizes. They do not
offer the same statistical rigour and they are particularly problematic
for research on the lowest attaining readers because, for example,
rapid improvement in reading age from a very low base will yield
high ratio gains even if the children are still very poor readers.
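The ratio gain can be sketched in the same illustrative fashion, using the worked figures from the paragraph above (the function name and variable names are our own):

```python
def ratio_gain(reading_age_gain_months: float, elapsed_months: float) -> float:
    """Gain in reading age over a period, expressed as a ratio of that period."""
    return reading_age_gain_months / elapsed_months

normal = ratio_gain(3, 3)   # three months' progress in three months -> 1.0
slow = ratio_gain(3, 12)    # three months' progress in a year -> 0.25

# Professor Brooks' suggested threshold of "educational significance":
significant = ratio_gain(17, 12) >= 1.4  # 17 months' progress in a year -> True
```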
The Evidence Check
The value of intervention
23. The Government's position is that the earlier
an intervention can be made for a struggling reader, the better.
We asked our panel of expert witnesses[16]
about the evidence base for early interventions and they were
in unanimous agreement:
Chairman: So, in your view, the evidence
is there to say early intervention works?
Professor Slavin: I think without
any doubt.
Chairman: Jean, do you support that?
Jean Gross: Yes, I do. I support
that completely.
Chairman: Professor Brooks?
Professor Brooks: Yes, it is better
if they come in early, when children are first identified as struggling,
but there are also programmes for those who are picked up rather
later.[17]
24. However, it is difficult to test directly and
we have therefore had to rely on comparative evidence. We turned
to Professor Greg Brooks' extensive review, What works for
pupils with literacy difficulties?, which was commissioned
by the Government.[18]
The first edition came out in 1998, with subsequent editions in
2002 and 2007. The tables of effect sizes and ratio gains for
all of the different interventions that he reviewed provide interesting
reading. They suggest that on average the earliest interventions
(years 1 and 2) are beneficial and provide approximately the same
benefits, give or take a few effect size or ratio gain fractions,
as later interventions (years 3 and beyond).[19]
Both the very earliest interventions and later interventions are,
on average, of "useful impact", to use Professor Brooks'
language.[20]
25. It is clear that literacy interventions, both
at the very earliest stages of formal education and later on,
can be effective. This is all the evidence we need, since it logically
follows that the earlier struggling readers are identified, the
longer the time educationalists have to tackle their problems
and teach them to read. (It is possible that there are also additional
benefits of bringing poor readers back to parity with their peers
as early as possible, for example boosting the self-esteem
of the individuals and enabling them to take part more fully in
the wider curriculum, although we have not been directed
to evidence that supports these suppositions.) The Government's
policy that literacy interventions should take place early on
in formal education is in line with the evidence.
26. Another angle on the same topic is to ask whether
these literacy interventions are worthwhile from a societal point
of view. The evidence we were referred to in order to answer this
problem was research carried out by the KPMG Foundation[21]
and the Every Child a Chance Trust,[22]
although it is fair to say that this cannot be described as wholly
independent given its interest in the work. These studies provided
estimates of the cost of literacy difficulties to society. Some
of these costs are simple to calculate because they are a direct
consequence of literacy difficulties; for example, adult literacy
classes. But the majority of costs require a subtler method of
estimation because the relationships between literacy difficulties
and the cost factors are complicated; for example, truancy, unemployment,
teenage pregnancy, depression, obesity and crime.
27. The KPMG Foundation and Every Child a Chance
Trust chose to make these complicated estimates by using frequency
differentials. For example:
10% of women with average literacy levels experience
depression and 36% of women with very low literacy skills experience
depression. The differential frequency is therefore 26%.
Applying this figure of 26% to the 12,300 females
in the year group population of 38,700 gives 3,198 more women
with very poor literacy skills who can be expected to experience
depression than would be expected if they were average readers.
79% of these (2,526 women) can be assumed to
escape depression because early literacy intervention has successfully
lifted them out of very low literacy levels.
We assumed that the differential rate of depression
applies for adult life, and cannot be limited to a particular
age range, so those with poorer literacy levels will be more likely
to experience depression whatever their age.
The costs of depression were identified as £194
(inflated to 2008 prices) per year per depressed person.
This cost was then applied throughout the adult
lives (ages 18-37, and over a lifetime) of the identified 2,526
subjects to obtain total cost savings for depression for women.[23]
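The depression example above can be restated step by step as follows. This is only an illustrative restatement of the Trust's arithmetic; the variable names are our own:

```python
# Figures quoted in the Every Child a Chance Trust's depression example.
rate_average_literacy = 0.10   # depression rate, women with average literacy
rate_very_low_literacy = 0.36  # depression rate, women with very low literacy
females_very_low = 12_300      # females with very poor literacy (year group of 38,700)

differential = rate_very_low_literacy - rate_average_literacy  # 26%
extra_cases = round(differential * females_very_low)           # 3,198 more women
helped = int(0.79 * extra_cases)                               # 2,526 assumed lifted out
annual_cost = 194                                              # per depressed person, 2008 prices
annual_saving = helped * annual_cost                           # applied over adult life (18-37+)
```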
28. These kinds of estimates are fraught with assumptions
and generalisations that will undoubtedly lead to miscalculation.
This particular methodology is likely to offer an overestimation
of the cost, because its simple calculation of a differential
frequency on the basis of two variables (normal literacy
skills versus low literacy skills) means that a large number
of co-occurring variables have not been considered. To be fair,
it would be very difficult to control for all these variables
properly and both reports are proportionately littered with cautions
about the data and methodologies. For example, one of the most
difficult costs to calculate was crime. The Every Child a Chance
Trust notes that there is a correlation between crime and literacy
difficulties, but warns:
[The d]ifferentials [are] based on empirical
data on [the percentage] of children with literacy difficulties
who also have behaviour problems (and empirical data about costs
of behaviour problems to criminal justice system) but [there are]
no controls for other factors (such as general cognitive ability,
social class) that might explain the link.[24]
29. With all the appropriate caveats and warnings,
the two reports make estimates of the long term costs of literacy
difficulties to society. The most recent Every Child a Chance
Trust report, estimates that literacy difficulties cost England
in the region of £2.5 billion every year (see Table 1).
30. The authors have suggested that these costs are
"conservative"[25]
because they have not included items such as soft costs (like
illness and loss of income) or "social services costs, social
housing costs, the costs of generally poorer health, the costs
of substance abuse over the age of 18, the costs of women's involvement
in the criminal justice system and lost tax on pension income".[26]
31. Although in this section we have discussed the
value of interventions in general, it is worth noting that the
reports were commissioned explicitly to ascertain whether the
Every Child a Reader programme represented value for money. The
conclusion of the reports was that it did. The second edition
calculates that for every pound spent on the Every Child a Reader
programme, there is an overall return on investment of between
£11 and £17.[27]
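As an illustration of what that multiplier implies (our own arithmetic, not a figure from the reports), it can be combined with the approximately £2,600 per-child cost of Reading Recovery quoted in paragraph 16:

```python
cost_per_child = 2_600       # approximate Reading Recovery cost per child (GBP)
roi_low, roi_high = 11, 17   # return per pound spent, per the second edition

return_low = cost_per_child * roi_low    # implied return per child at the low end
return_high = cost_per_child * roi_high  # implied return per child at the high end
```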
32. Estimating the cost of literacy difficulties
is clearly not easy to do, but we believe that this should not
stop researchers from making the best estimates they can. We were
impressed by the KPMG Foundation and Every Child a Chance Trust's
efforts. While the figures quoted are unlikely to be correct,
they clearly show that there is a substantial cost associated
with literacy difficulties. Spending money on literacy interventions
is a cost effective thing to do. The Government's position
that early literacy interventions are an investment that saves
money in the long run is evidence-based.
Table 1. The long term costs of literacy difficulties.[28]
ALTERNATIVES TO READING RECOVERY
33. We are satisfied that the Government is right
to support literacy interventions, but the question remains as
to which literacy interventions are most appropriate. There are
many choices. Professor Brooks reviewed the evidence in 1998,
2002 and 2007 for the Government. In the 2007 edition, he reviewed
large and small group teaching, one-to-one teaching, paired reading,
and teaching using information and communication technologies
(ICT). He also analysed the impact of teaching phonological skills,
comprehension and self-esteem. In all, he reviewed evidence for
48 different kinds of reading intervention.[29]
34. Given the large range of options, we asked the
Government what other kinds of interventions were considered before
the decision was taken to make Reading Recovery the bedrock of
the Every Child a Reader programme. Jennifer Chew, a retired English
teacher, summarised the Government's response to the Committee's
question about the cost effectiveness of different literacy interventions:
Other literacy interventions are available which
are cheaper than Reading Recovery, which are more consistent with
Wave 1 teaching, and which arguably produce better results […]
If the government did consider alternatives before providing
funding for Reading Recovery, it should be able to say which the
alternatives were, how they were investigated, and what evidence
led to the conclusion that they were less cost-effective than
Reading Recovery.[30]
35. In oral evidence, we asked Ms Johnson and Ms
Willis what alternative interventions were considered. Ms Johnson
told us she did not know the answer,[31]
and Ms Willis pointed us to Professor Brooks' 2002 review.[32]
We asked for a supplementary memorandum from the Government on
this point. The Government responded:
Interventions other than Reading Recovery we considered
The choice of Reading Recovery as the core intervention
of the ECAR programme was made during the pilot phase led by the
Every Child A Chance Trust. The Department saw no reason to change
this when taking on the programme for national roll-out.[33]
36. In other words, the Government did not formally
consider any other kind of intervention. We pressed Ms Willis
on this point who reassured us that: "it is my strong interest
and my purpose in the Department to ensure that policy is based
on sound evidence".[34]
She continued:
We have commissioned an independent evaluation
just recently from IFS [Institute for Fiscal Studies], the University
of Nottingham, and NatCen to look very carefully at how the programme
is implemented, to undertake a cost/benefit analysis and to look
at the value for money of the way in which ECAR is being implemented.
[…] What we will need to do […] is compare that with
other emerging evidence from alternative interventions.[35]
37. Ms Willis is right to acknowledge the need
to compare Reading Recovery with alternative interventions. We
conclude that, whilst there was evidence to support early intervention,
the Government should not have reached the point of a national
roll-out of Reading Recovery without making cost-benefit comparisons
with other interventions.
THE QUALITY OF THE EVIDENCE
38. We have already discussed what evidence we were
looking for in dealing with this problem: randomised controlled
trials that use standardised test scores. We were alarmed to discover
that both are lacking in the UK literacy research base. On data
quality, Sir Jim Rose noted:
Most of the US studies included in Dr Chris Singleton's
review[36] have reported
standard scores; unfortunately few of the UK studies have done
so, sometimes because the tests used do not provide tables of
norms in standard score form.[37]
39. Professor Brooks, in the 2007 version of What
Works, summarised some of the problems with the data used
in UK literacy intervention trials:
Three particular problems arose from the tests
used in the 121 studies. Firstly, some of the tests were old even
when used in the relevant studies.
Secondly, most of the tests provided only reading/spelling
age data and not standardised scores. Though apparently easier
to interpret, reading and spelling ages are statistically unsatisfactory
[…] Reading and spelling age data do allow the calculation
of the ratio gain, but this is in itself not a very useful
statistic, especially for low-attaining groups. […]
Thirdly, for many of the tests used it was impossible
to calculate effect sizes, which are statistically much more satisfactory
than ratio gains. If a standardised test is used, an effect size
can be calculated even in the absence of an explicit comparison
group; but if a non-standardised test is used then an effect size
can be calculated only if comparison group data, including the
standard deviation, are reported.[38]
40. We are concerned by the low quality of data
collection in UK trials on literacy interventions. Government-funded
trials should seek the best data so as to make the results as
powerful as possible. Running trials that do not collect the best
data is a failure both in terms of the methodological approach
and in terms of value for money.
41. Professor Brooks also commented on the different
types of trial model used to assess literacy interventions. He
noted that there are very few randomised trials "and some
of those were so small as to be hardly worth carrying out".[39]
There were a few quasi-experiments with matched groups, but the
bulk were either of unmatched groups or just one-group studies.
Research design of 121 studies analysed by Professor Brooks
in the 2007 version of his review.[40]

Research design                             n
randomised controlled trial                 9
matched groups quasi-experiment            21
unmatched groups pre-test/post-test study  18
one-group pre-test/post-test study         73
42. Given the Government's support for Reading Recovery
we were alarmed to discover that none of the nine randomised controlled
trials were on Reading Recovery.[41]
We were reassured by Ms Willis that there have been RCTs of Reading
Recovery in the United States:
Reading Recovery has a strong evidence base.
It has been reviewed for example by the What Works Clearinghouse
in the States and includes a number of randomised controlled trials.[42]
43. However, when we sought these trials, we discovered
that they took place in 1988,[43]
1994,[44] 1997,[45]
and 2005.[46] More than
15 years separate the first trial from the most recent. This is
important because Reading Recovery has evolved considerably over
time: between the '90s and mid-'00s, Professor Brooks observed,
"Reading Recovery changed considerably, to reflect international
research, and now includes a large amount of phonological awareness
and phonics".[47]
In other words, three of the four RCTs were on a kind of
Reading Recovery that is not used in the UK.
44. Ms Willis stated that the Government has accepted
the United States What Works Clearinghouse report as supporting
the effectiveness of the Reading Recovery programme, but was unaware
of evidence from Mary Reynolds et al in the International
Journal of Disability, Development and Education, which contradicts
this work and concludes other methods would probably work better.[48]
45. The Government should be careful when selecting
evidence in support of educational programmes that have changed
over time. Reading Recovery today differs from its 1980s and 1990s
ancestors. Evidence used to support a national rollout of Reading
Recovery should be up-to-date and relevant to the UK. The Government's
decision to roll out Reading Recovery nationally is not based
on the best quality, sound evidence.
RANDOMISED CONTROLLED TRIALS
46. To recap, the Government is rolling out Reading
Recovery nationally without having subjected it to a randomised
controlled trial in the UK school system. The Government had an
opportunity to commission a randomised controlled trial of Reading
Recovery when it started the three-year pilot for Every Child
a Reader in 2005. It would have been particularly sensible in
the light of calls from leading experts that RCTs were needed.
In 2001, Carole Torgerson and David Torgerson from the Institute
for Effective Education, University of York, made a general call
for more RCTs in educational research:
Educational researchers have largely abandoned
the methodology they helped to pioneer. This gold-standard methodology
should be more widely used as it is an appropriate and robust
research technique. Without subjecting curriculum innovations
to a RCT then potentially harmful educational initiatives could
be visited upon the nation's children.[49]
And Professor Brooks made a specific call for an
RCT on Reading Recovery when he was a member of the advisory group
to the Every Child a Reader study.[50]
47. We asked Ms Willis, DCSF's Chief Scientific Adviser,
why an RCT had not been undertaken. She told us that RCTs are
"the gold standard in terms of research" but that "I
do not believe it is always essential to have a randomised controlled
trial".[51] She
noted a number of problems with running an RCT:
- there is a "logistical issue around picking schools and then
randomly allocating them and then having […] teacher leaders
in local authorities […] going out to train up the teachers
in individual schools";[52]
- it is difficult to get schools to agree to take
part in research generally and randomised controlled trials in
particular;[53] and
- DCSF has a "limited research budget"
and "matched comparison groups can be undertaken at less
cost and deliver very similar quality results".[54]
48. She added, however, that:
I should add the Department is not against RCTs.
We are running a number. We have one underway looking at foster
care and we have one underway looking at how to reduce teenage
pregnancy, a very interesting randomised controlled trial there,
although that has taken over a year just to try and get the methodology
right and actually convince people to take part. Of course we
have the Every Child Counts randomised controlled trial.[55]
49. We cannot understand why these issues warranted
the extra time, effort and money, while a crucially important issue
such as literacy did not. We put it to Ms Willis that the Government
should have criteria for determining whether or not a research
project requires an RCT,[56]
for example, in the Magenta Book, which provides guidance on social
science research and policy evaluation.[57]
We were pleased that Ms Willis found our suggestion "interesting".[58]
We recommend that the Government should draw up a set of criteria
on which it decides whether a research project should be a randomised
controlled trial.
50. But still the question remains as to whether
a randomised trial was necessary. Jean Gross, Director of the
Every Child a Chance Trust, thinks it was: "Looking back,
I wish at the time we had been able to find money to do a randomised
control trial".[59]
And as we already mentioned, Professor Brooks thought so too.[60]
We agree. The arguments put forward by Ms Willis for not doing
an RCT do not stand up to scrutiny.
51. The argument that it is difficult to persuade
schools to engage in research is made on the basis that "Some
local authorities or schools perceive it as unfair that some of
their pupils will be getting some sort of intervention that others
are not".[61] This
is focussing on a problem that should not exist. We have already
said that because it is so clear that one-to-one interventions
are better than doing nothing, it is not worth conducting trials
that compare one-to-one interventions against no intervention.
Rather, it would be more beneficial to conduct randomised trials
that compared two or more kinds of intervention, in order to determine
which was better. This would avoid the ethical problem of some
children not receiving help when they need it and should make
it easier to persuade schools to take part in research (provided
the interventions offered reasonable prospect of improvement).
52. The logistical difficulties of conducting randomised
trials for literacy interventions are no different from those for
any other kind of educational randomised trial; they are not prohibitive
and accordingly neither are the costs (approximately £250,000).[62]
As we have already discussed, early literacy interventions appear
to be highly cost effective. In our view, proven cost effective
measures that have the potential to be even more cost effective
should be subjected to the highest quality research, not marginally
cheaper substitutes. The Government spent a lot of money setting
up the pilot programme for Every Child a Reader, which failed
to gather the highest quality data or use a gold-standard trial
design.
53. Finally, Professor Brooks raised another problem
with running randomised controlled trials: time.[63]
Although running an RCT does not take longer than any other kind
of well designed trial, setting one up can take a long time. Ms
Willis told us that it has taken over a year to set up an RCT
on reducing teenage pregnancies.[64]
But this is not a good enough excuse: the pilots for Every Child
a Reader were due to run for three years from 2005. That would
have been more than enough time to set up and run a large randomised
controlled trial on more than just Reading Recovery.
54. We conclude that a randomised controlled trial
of Reading Recovery was both feasible and necessary.
55. But we are where we are. The Government is rolling
out Every Child a Reader nationally. However, we do not consider
this a reason to abandon further research. Reading Recovery is
an expensive component of the ECaR programme and if there is an
equally effective but cheaper alternative, it should be sought
out. Research in this area should be ongoing, not stop when the
Government decides to roll out a particular programme. We recommend
that the Government identify some promising alternatives to Reading
Recovery and commission a large randomised controlled trial to
identify the most effective and cost-effective early literacy
intervention.
AN EMPHASIS ON PHONICS?
56. We have one final concern. The teaching of systematic
phonics (see box below) became a requirement by National Curriculum
Order in 2007.[65] But,
according to Dr Singleton, Senior Research Fellow at the University
of Hull, Reading Recovery is "a pedagogical sibling to the
'whole-language' theory of reading, which maintains that reading
skills arise naturally out of frequent encounters with interesting
and absorbing reading materials".[66]
This view of learning has been "increasingly contested",
said Dr Singleton in his 2009 review of the evidence,[67]
and over time the accumulating evidence on the benefits of systematic
phonics work has influenced the way in which Reading Recovery
is delivered.[68] Despite
this trend, Dr Singleton concludes that:
Phonics
Sir Jim Rose, in his 2006 report on the teaching of early reading, quotes Linnea Ehri:
Phonics is a method of instruction that teaches students correspondences between graphemes in written language and phonemes in spoken language and how to use these correspondences to read and spell words. Phonics instruction is systematic when all the major grapheme-phoneme correspondences are taught and they are covered in a clearly defined sequence.[69]
Synthetic phonics: readers are taught to break down words into their constituent parts and work out how to pronounce words for themselves (e.g., 'shrink' would be broken down into 'sh', 'r', 'i', 'n' and 'k' and blended together).
Analytical phonics: readers are taught consonant blends as units (e.g., 'shr' is taught as a whole unit) and analyse sound-symbol relationships but do not blend the words together.
Embedded phonics: readers are taught phonics as part of a whole-word approach to reading, not as separate lessons.
He went on to recommend, and the Government accepted, that "the case for systematic phonic work is overwhelming and much strengthened by a synthetic approach".[70]
[N]either Reading Recovery as part of [Every Child a Reader]
nor Reading Recovery in the UK more generally provides systematic
phonics instruction. [D]espite these reported changes to
the reading recovery programme, a fundamental conflict still remains
between its approach and the revised National Literacy strategy,
in which systematic teaching of phonics is now a central
feature.[71]
57. We received evidence that supported this position. Elizabeth
Nonweiler, Teach to Read, highlighted Dr Sue Bodman's method of
delivering a Reading Recovery lesson:
Bodman (2007)[[72]]
describes a Reading Recovery lesson, which, she claims, 'links
the teaching actions to the ideas of synthetic phonics': After
reading a book, a child observes his teacher reading the word
'can' 'whilst demonstrating a left to right hand sweep'. Then
he builds 'can' with magnetic letters and reads it himself. It
is clear that the child was asked to read a text before acquiring
the phonic knowledge and skills involved, and to read a word after
being told the pronunciation. With synthetic phonics children
read texts after learning the phonic knowledge and skills involved
and they are not told the pronunciation of a new word before being
asked to read it.[73]
58. We put it to the Minister, Ms Johnson, that the Government's
use of Reading Recovery to help the poorest readers does not square
with its support of the use of systematic phonics, particularly
for children diagnosed with dyslexia. She told us:
Perhaps I ought to just say that the Reading Recovery that
we are looking at, in terms of the evidence of Reading Recovery
over the last 20 or 30 years, has changed and obviously phonics
is now much more embedded within Reading Recovery than it was
in the earlier examples of Reading Recovery.[74]
59. Phonics is "embedded" in the modern Reading Recovery,
but systematic, synthetic phonics, as we have discussed above,
is not. Teaching children to read is one of the most important
things the State does. The Government has accepted Sir Jim Rose's
recommendation that systematic phonics should be at the heart
of the Government's strategy for teaching children to read. This
is in conflict with the continuing practice of word memorisation
and other teaching practices from the 'whole language theory of
reading' used particularly in Wave 3 Reading Recovery. The Government
should vigorously review these practices with the objective of
ensuring that Reading Recovery complies with its policy.
4 www.nationalstrategies.org.uk
5 Sir Jim Rose, Identifying and Teaching Children and Young People with Dyslexia and Literacy Difficulties, 2009, p 60
6 DCSF, The Children's Plan: Building brighter futures, 2007, p 4
7 Ev 3 [Every Child a Chance Trust], para 3.10
8 Policy Exchange, Rising Marks, Falling Standards, 2009, p 37
9 www.literacytrust.org.uk/database/readingrecovery.html. According to www.nationalstrategies.org.uk: "Since 1998, the National Strategies have taken the form of a professional development programme providing training and targeted support to teachers (and increasingly practitioners). The training and support programmes have been supplemented by materials and resources provided free of charge and widely accepted as being of high quality. During 2005 and 2006, the National Strategies took on an expanded brief, which increased the age range covered and the way in which system-wide challenge and support is provided through the Strategies. The expanded functions include: Local Authority whole school improvement; School Improvement Partners; Behaviour & Attendance; Early Years from 0 to 5 years; 14-19."
10 Chris Singleton, Intervention for Dyslexia, 2009, p 95
11 Q 20 [Professor Greg Brooks]
12 Ev 6 [Every Child a Chance Trust], para 6.6
13 Greg Brooks, What works for pupils with literacy difficulties?, DCSF, 2007; Sir Jim Rose, Identifying and Teaching Children and Young People with Dyslexia and Literacy Difficulties, 2009; Chris Singleton, Intervention for Dyslexia, 2009
14 Chris Singleton, Intervention for Dyslexia, 2009, p 26
15 Greg Brooks, What works for pupils with literacy difficulties?, DCSF, 2007, p 270
16 Professor Bob Slavin, Director of the Institute for Effective Education, University of York, and Director of the Center for Research and Reform in Education, Johns Hopkins University; Jean Gross, Director of the Every Child a Chance Trust; Professor Greg Brooks, Research Director of the Sheffield arm of the National Research and Development Centre, University of Sheffield.
17 Qq 3-5
18 Greg Brooks, What works for pupils with literacy difficulties?, DCSF, 2007
19 We calculated this by averaging the effect size and ratio gain in the experimental groups on accuracy for those interventions that focussed on years 1 and 2 (the early cohort) and years 3 and beyond (the later cohort). We discounted those studies that assessed both early and later cohorts. The average effect size for the early interventions was 0.5 (excluding the highest effect size, 3.5, a clear outlier); for the later interventions it was 0.5. The average ratio gain for the early interventions was 2.5; for the later interventions it was 2.8 (excluding the highest ratio gain of 16.1, a clear outlier). This is not a rigorous test (for example, we did not weight according to the quality of the trials) and is intended to be indicative only.
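The averaging described in footnote 19 can be sketched as follows. The individual study values below are invented placeholders (the footnote reports only the resulting averages); the outlier-exclusion step mirrors the one the footnote describes.

```python
# Illustration of the averaging in footnote 19: a simple mean taken
# after excluding clear outliers. The study values are hypothetical
# placeholders, not the actual figures from Brooks (2007).

def mean_excluding_outliers(values, outliers=()):
    """Average the values after dropping any listed as outliers."""
    kept = [v for v in values if v not in outliers]
    return sum(kept) / len(kept)

# Early-cohort effect sizes (hypothetical), with the 3.5 outlier
# excluded before averaging, as the footnote does.
early_effect_sizes = [0.4, 0.6, 0.5, 3.5]
print(mean_excluding_outliers(early_effect_sizes, outliers=(3.5,)))
```

As the footnote itself notes, an unweighted mean of this kind is indicative only; a rigorous synthesis would weight each study by its quality and sample size.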
20 Greg Brooks, What works for pupils with literacy difficulties?, DCSF, 2007, p 270
21 KPMG Foundation, The long term costs of literacy difficulties, 2006
22 Every Child a Chance Trust, The long term costs of literacy difficulties, 2nd edition, 2009
23 Every Child a Chance Trust, The long term costs of literacy difficulties, 2nd edition, 2009, p 21
24 Every Child a Chance Trust, The long term costs of literacy difficulties, 2nd edition, 2009, p 46
25 As above
26 Every Child a Chance Trust, The long term costs of literacy difficulties, 2nd edition, 2009, p 21
27 Every Child a Chance Trust, The long term costs of literacy difficulties, 2nd edition, 2009, p 5
28 Every Child a Chance Trust, The long term costs of literacy difficulties, 2nd edition, 2009, p 6
29 Greg Brooks, What Works for Children with Literacy Difficulties? The Effectiveness of Intervention Scheme, Department for Education and Skills, 2002, p 15
30 Ev 67 [Jennifer Chew], para 7
31 Q 184
32 Q 185
33 Ev 54
34 Q 119
35 Qq 119-120
36 Chris Singleton, Intervention for Dyslexia, 2009
37 Sir Jim Rose, Identifying and Teaching Children and Young People with Dyslexia and Literacy Difficulties, 2009, p 176
38 Professor Greg Brooks, What works for pupils with literacy difficulties?, Department for Children, Schools and Families, 2007, p 110
39 Q 21
40 Greg Brooks, What Works for Children with Literacy Difficulties? The Effectiveness of Intervention Scheme, Department for Education and Skills, 2002, p 124
41 Greg Brooks, What Works for Children with Literacy Difficulties? The Effectiveness of Intervention Scheme, Department for Education and Skills, 2002, pp 124-125
42 Q 115; see also Q 171
43 G. S. Pinnell, D. E. DeFord & C. A. Lyons, "Reading Recovery: Early intervention for at-risk first graders", Educational Research Service Monograph (1988)
44 G. S. Pinnell, C. A. Lyons, D. E. DeFord, A. S. Bryk & M. Seltzer, "Comparing instructional models for the literacy education of high-risk first graders", Reading Research Quarterly, Vol. 29 (1994), pp 8-39
45 N. Baenen, A. Bernhole, C. Dulaney & K. Banks, "Reading Recovery: Long-term progress after three cohorts", Journal of Education for Students Placed at Risk, Vol. 2 (1997), p 161
46 R. M. Schwartz, "Literacy learning of at-risk first-grade students in the Reading Recovery early intervention", Journal of Educational Psychology, Vol. 97 (2005), pp 257-267
47 Greg Brooks, What works for pupils with literacy difficulties?, 2007, p 74
48 M. Reynolds & K. Wheldall, "Reading Recovery 20 years down the track: Looking forward, looking back", International Journal of Disability, Development and Education, Vol. 54 (2007), pp 199-223
49 C. J. Torgerson & D. J. Torgerson, "The need for randomised controlled trials in educational research", British Journal of Educational Studies, Vol. 49 (2001), pp 316-328
50 Q 56
51 Q 134
52 Q 177
53 Qq 163, 177
54 Q 134
55 As above
56 Q 168
57 Ev 56; Government Social Research Unit, The Magenta Book: guidance notes for policy evaluation and analysis, 2007
58 Q 168
59 Q 24
60 Q 56
61 Q 163
62 Q 24
63 Q 54
64 Q 134
65 Ev 55
66 Chris Singleton, Intervention for Dyslexia, 2009, p 96
67 Chris Singleton, Intervention for Dyslexia, 2009, p 97
68 As above
69 Sir Jim Rose, Independent review of the teaching of early reading, Department for Education and Skills, 2006, p 17
70 The Rose Report, p 20
71 Chris Singleton, Intervention for Dyslexia, 2009, p 99
72 S. Bodman, "Skilful teaching of phonics in Reading Recovery", The Running Record, Issue 12 (2007), pp 3-5
73 Ev 74 [Elizabeth Nonweiler], para 5
74 Q 114