Chapter 2:
Problems of the measurement culture, and attempts to solve them
28. We believe that the Government has laudable aspirations
for its public service targets and performance tables. Yet the policy was unpopular with many of our witnesses.
Even where they agreed in principle with targets (which almost
all said they did), they expressed serious reservations about
their operation in practice. Allegations of cheating, perverse
consequences and distortions in pursuit of targets, along with
unfair pressure on professionals, continue to appear. League tables
are often seen as untrustworthy and misleading.
29. This Chapter looks at some of the evidence for
this unpopularity, and tries to analyse the reasons. Below, we
take each of the Government's aspirations in turn, assessing the extent to which it has failed to meet them.
Five failings
Lack of clarity about what the Government is trying to achieve; failure to produce equity
30. Much of our evidence suggested that the measurement
culture had failed to give a clear enough statement of the Government's
aims and priorities. While there was much exhortation and a wide
range of targets, there was not a sufficiently coherent lead.
31. The recent Cabinet Office review of Executive
Agencies observed that "the link between Public Service Agreement
targets and agency key targets is
often unclear"[21]
and "it is often difficult for agencies to see any real link
between the services they deliver and the needs of the Department".[22]
32. The idea of relying on national targets to promote
greater equity also raises a number of difficult issues. A national
target can be met in more than one way, and some of them promote
greater equity while others do not. For example, a 10% improvement
in services can be achieved if all providers improve equally.
Alternatively, it can be achieved if some units do disproportionately
well while others fail. If top performers improve most, this will
widen the gap between citizens in different parts of the country,
while if poor performing agencies do best, this will not only
raise the average but also reduce inequalities. It is important
therefore to be clear about objectives.
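The point can be made concrete with a minimal sketch. The provider scores below are invented purely for illustration; they show three routes to the same 10% national improvement with very different consequences for equity:

```python
# Illustration only: provider scores are invented, not drawn from the evidence.
# Three ways to deliver the same 10% overall improvement in a national target.

before = [50, 60, 70, 80]
scenarios = {
    "all improve equally": [55, 66, 77, 88],   # every provider up 10%
    "top performers gain": [50, 60, 70, 106],  # only the best improves
    "weakest catch up":    [72, 71, 71, 72],   # the laggards improve most
}

for name, after in scenarios.items():
    gain = sum(after) / sum(before) - 1        # national average improvement
    spread = max(after) - min(after)           # gap between best and worst
    print(f"{name}: +{gain:.0%}, spread {spread} (was {max(before) - min(before)})")

# all improve equally: +10%, spread 33 (was 30)
# top performers gain: +10%, spread 56 (was 30)
# weakest catch up:    +10%, spread 1  (was 30)
```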
Failure to provide a clear sense of direction and ambition and to help plan resources; failure to communicate a clear message to staff
33. We doubt that the current target regime has succeeded
in providing a clear sense of direction and ambition for our public
services. Targets can never be substitutes for a proper and clearly
expressed strategy and set of priorities, and we found that witnesses
identified a significant risk that the target setting process
had subverted this relationship, with targets becoming almost
an end in themselves rather than providing an accurate measure
of progress towards the organisation's goals and objectives. Targets
can be good servants, but they are poor masters.
34. In his evidence to us Sir Michael Bichard recorded
his concern that targets were "almost being presented as
a substitute for business planning, that really all you needed
was a small set of targets, they were in the PSA and you got your
comprehensive spending money and then they were reviewed".[23]
For him the key point was that targets should be "dropping
out of the business plan" and not the other way round. Targets
are no substitute for effective management. Peter Neyroud expressed
a similar concern about the need to link target setting to strategic
direction. He told us that "The linking of targets to a clear
strategic direction and to resource allocation will ensure that
a more limited number of well designed targets would be likely
to have a far greater impact than a plethora of ill considered
targets".[24]
35. Where centrally-imposed targets differ widely
from what local people judge to be sensible aspirations, tensions
can arise, making it difficult to keep a sense of 'direction and
ambition'. Jonathan Harris, formerly Director of Education for
Cornwall, told us during our visit to Bristol of his disagreement
with the DfES over targets for Key Stage 2. He had an admirably
clear idea of what targets can do: "The purpose of setting targets is to motivate the staff to perform better".[25]
He suggested, like so many of our witnesses, that targets should
be produced at least partly on the basis of local knowledge, "If
you think, 'I can just about make it', you have a stretching target
and I think services improve". Instead of this, Cornwall
was given what teachers, administrators and local politicians
all felt was an unrealistic target, based upon national figures.
Eventually, after tough negotiations, the two sides simply agreed
to disagree. The outcome appeared to be counter-productive. As
Mr Harris put it, "Something imposed from above nationally
which has little relevance to a teacher in a school in the middle
of Bodmin Moor is not necessarily stretching her and it may not
actually achieve improvement".[26]
36. Many top-down targets were condemned by our witnesses.
The Rt Hon Estelle Morris MP, former Secretary of State for Education,
conceded to us that "The biggest problem at the moment is
that the profession feels no ownership of the targets, none whatsoever".[27]
She added that "The key thing is the national target was
set first, that was what caused the problem … but if the target
was set at school level and then you built up you would not have
the problem in making the jigsaw pieces fit the jigsaw".[28]
Sir Michael Bichard said it was "absolutely hopeless to set
a national target and then just tell local delivery units to go
away and achieve those because they have got no idea what that
national target means in terms of their performance, what they
need to do to improve so that the national target is achieved".[29]
As Lord Browne observed "You cannot impose targets by fiat".[30]
We strongly agree, and we also feel that, at the front line in
the public services, there is still a perception that this is
what is happening.
37. The contrast with the commercial world is clear.
The Audit Commission told us that "the private sector is
comfortable with targets" because, while they are determined
from the top, they are "built on measures which are valid
from the 'bottom-up', for example sales, and generally accepted
as valid".[31]
The same could not be said for much of the public sector in the
UK.
38. The underlying problem seems to be that central
departments often do not understand what life is like for those
who deliver services. This was the view of James Strachan, Chairman
of the Audit Commission, who identified one way in which physical
and organisational geography can defeat attempts to learn from
experience:
"What concerns me is that there are two very
severe skills shortages, one at the centre and one locally. At
the centre there is still a real paucity at the senior level of
people who are involved in the setting of targets, a lack of real
world delivery experience and this is shown time and time again.
And, secondly, related to that, is the fact that if you have a
very controlling centre you have a tendency not only to set the
'what', but also to get far too much involved in the 'how'. The
second point is at a local level. At the local level often the
experience of real world delivery is there, but what is not there
is a real understanding of both the strengths but also the limitations
of these tools and, of course, we see far too often that the mechanism
which is purely a means, becomes an end in itself. It is not a
learning tool, it is the actual object of all activity. That is
very dangerous".[32]
39. Much of our evidence bears out Mr Strachan's
contention. Jan Filochowski, Chief Executive of the Royal United Hospital, Bath, criticised ministers for imagining that it was easy
to replicate best practice all over the country:
"I think maybe ministers sometimes feel that
because one place does it right everyone can do it right, and
it really is not as simple as that … When we started to be successful and people said, 'Why don't you give a seminar and tell people how to do it?' I said, 'No, we have got to build up
a whole battery of skills, it is a year long task, we have got
to think about how you change the approach, it is a major, major
task'. That is why it is not so transferable".[33]
40. While professionals should have no veto over change or accountability, there is a need to take proper account of the existence and expertise of professional groups. We had a great
deal of evidence from medical colleges, headteachers' associations
and others concerned with professional standards, much of it expressing
concern that targets failed to take account of their special expertise
and judgement. Many (especially in the health service) felt undermined
by targets, with the late 1990s obsession with cutting waiting
lists frequently cited as the most damaging example.[34]
41. The Governor of Durham Prison and President of
the Governors' Association, Mike Newell, made clear the importance
of getting the professionals involved when targets are set:
"The key is the relationship between those professional
managers and the target-setting process … about making sure
that the agenda of what needs to happen for performance improvement
in an organisation is driven by professional involvement. If you
have it at ministerial and senior civil service and our level
end and you do not have a full connection with the professionals
then you may end up measuring the wrong things and you may end
up with very poor performing prisons, despite all the targets".[35]
42. Whilst the Treasury claims that it takes into
account many views about measuring targets, all five groups it
mentioned to us are at the top layer of government.[36]
Even senior representative organisations, like the Local Government
Association, can sometimes feel neglected by government when it
comes to setting targets:
"We have not really had sufficient dialogue
about the targets as such. We certainly did not the first time
around. It has improved a bit in the last spending review".[37]
43. Evidence about service delivery must be collected
in the first instance by the people doing the work. Under the
Next Steps initiative so many agencies have become distanced from
central government that Whitehall's capacity to understand and
control evidence of programme performance has weakened, ironically
just as its anxiety to deliver has increased.
44. Many of our witnesses said that there were too
many targets from Whitehall. Mr John Grogono Thomas, a teacher
at Novers Lane Primary School in Bristol, which we visited, said
that "there have become too many targets. They become meaningless
since managing them creates so much bureaucracy that they become
distracting and cannot be effectively delivered. Also pupils cannot
focus on them all".[38]
45. Another problem is the tendency for departments
sometimes to appear to pluck targets out of the air in support
of the latest initiative. Such targets will command neither respect
nor credibility. A number of our witnesses cited the aim to reduce
school truancy by 10% by 2004 compared to 2002 as a prime example
of a target where the objective was seen as relevant and highly
desirable but where the target figure was seen as quite arbitrary.
46. Sir Jeremy Beecham of the Local Government Association
thought the truancy target was "not meaningless in the sense
that it is a figure which might be justified in practice but one
does not know how it has been derived".[39]
Similarly David Hart, General Secretary of the National Association
of Head Teachers (NAHT), could find no objective basis for a figure
of 10%: "I think the reduction in truancy to 10% is not a
bad target but again it is a target plucked out of the air. Why
10%? Why not 15% or 20% or 5%? … It is the percentage figures;
it is the lack of proper consultation and discussion".[40]
And in the end Estelle Morris found "… it difficult to use the target to develop the dialogue that should have been possible".[41]
We share these concerns. It is clearly not sensible to have targets
set in such an arbitrary way.
Failure to focus on delivering results
47. Do targets actually deliver results? We found
that the Government was achieving the majority of the PSA targets
it had set itself, and that it had fulfilled its requirements
to report performance against them (see below, paras 68-73). Yet we discovered that targets and results were different things.
The Government hopes that target setting will encourage service
providers to apply creativity in making their activities contribute
effectively to delivery. But in some cases creativity is being
directed more to ensuring that the figures are right than to improving
services. This is where measurement ceases to be a means to an
end and becomes an end in itself.
48. The danger with a measurement culture is that
excessive attention is given to what can be easily measured, at
the expense of what is difficult or impossible to measure quantitatively,
even though this may be fundamental to the service provided (for
example, patient care, community policing, or the time devoted
by a teacher to a child's needs). There is the further danger
that the demands of measurement may be so consuming of time and
effort that they detract from the pursuit of a service's underlying
purpose.
49. The measurement culture is also in danger of
threatening standards. We heard of a number of cases where delivering
on targets seemed to have become more important than delivering
on services. Alarmingly, we received evidence that targets for
ambulance response were jeopardising the effective delivery of
services, and clinical outcomes. The national targets for ambulances
require them to respond to life threatening emergencies within
a certain number of minutes. We took impressive evidence from
the Chief Executive of the Staffordshire Ambulance Service, Roger
Thayne, who had a serious divergence of view with the Department
of Health about the most appropriate measure of ambulance effectiveness.
He argued strongly that the national response time target was
inadequate:
"The NHS Ambulance Service generally accepts
that:
a) There is no uniform standard of measurement of
ambulance response times within Ambulance Services and that the
clock starts at different times which may vary by as much as 3
minutes.
b) The classification of what is a life threatening
emergency differs between Ambulance Services and ranges from less
than 10% of all emergency calls to above 50%".[42]
50. Mr Thayne's view was that the differences between
the starting points, and the varying definitions of a 'life threatening
emergency', cast doubt on the usefulness of the target. He then
went on to suggest that one management response to the target
could undermine professionalism:
"The measurement of response allows the clock
to stop when either an ambulance or qualified 'responder' arrives
on scene. All ambulance services have therefore developed single
paramedic fast response capabilities and many have also introduced
lay first responders to help achieve response times. It is questionable
whether the lay responders are either trained or equipped to meet
the range of emergency conditions to which they are responded".[43]
51. It is clear that 'lay responders' are seen by
some as the product more of pressure to meet targets than of real
professional judgement. Mr Thayne's service meets the national
target, but also has its own, which is based on a variety of indicators,
including cardiac arrest survival and general morbidity statistics.
He told us that the outcomes were excellent, and that he felt
that national targets were not appropriate on their own.
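A minimal sketch, using invented call times and an illustrative 8-minute threshold standing in for a Category A response standard, shows how the clock-start point alone can decide whether the same response counts as meeting the target:

```python
# Illustration only: the times are invented, and the 8-minute threshold is an
# assumption standing in for a Category A response-time standard.

call_connected = 0.0    # minutes: call first reaches the control room
details_taken = 3.0     # minutes: address confirmed and call classified
arrival = 9.5           # minutes: responder arrives on scene

TARGET = 8.0            # minutes allowed for a 'life threatening' response

for label, clock_start in (("clock starts at connection", call_connected),
                           ("clock starts after call-taking", details_taken)):
    response = arrival - clock_start
    verdict = "met" if response <= TARGET else "missed"
    print(f"{label}: {response:.1f} min, target {verdict}")

# Same ambulance, same patient: 9.5 min (missed) or 6.5 min (met),
# depending purely on the measurement convention.
```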
Perverse consequences
52. In another part of the healthcare system, we
had evidence of problems with the consequences of targets for
ophthalmology in Bristol. Dr Richard Harrad, Clinical Director
of the Bristol Eye Hospital, told us:
"The waiting time targets for new outpatient
appointments at the Bristol Eye Hospital have been achieved at
the expense of cancellation and delay of follow-up appointments.
At present we cancel over 1,000 appointments per month. Some patients
have waited 20 months longer than the planned date for their appointment.
We have kept clinical incident forms for all patients, mostly
those with glaucoma or diabetes, who have lost vision as a result
of delayed follow-up; there have been 25 in the past 2 years.
This figure undoubtedly underestimates the true incidence and
of course there is the large backlog of patients still to be seen.
One particularly sad case was that of an elderly lady who was
completely deaf and relied upon signing and lip-reading for communication.
She lives with her disabled husband who like her is completely
deaf. Her follow-up appointment for glaucoma was delayed several
times and during this time her glaucoma deteriorated and she became
totally blind".[44]
53. This is just one example of a wider problem.
Dr Ian Bogle of the BMA had similar evidence:
"In my own area where I have worked for many
years the ophthalmic unit cancelled 19,500 follow-up appointments
in a six-month period so that new patients could be seen to reach
the target for new patients being seen".[45]
When meeting targets comes before clinical need, priorities are clearly being inverted rather than advanced.
Problems with cross-cutting targets
54. The Government stresses the benefits of targets
as ways of making different services pull together to deliver
results for the public. However, we found that in some cases the
attempt to use cross-boundary and trans-departmental targets as
a means of fostering a more 'joined-up' approach to service delivery
was failing. This was largely because targets for individual departments either needed to be balanced against priorities in other departments or, more simply, were incompatible with them. Mr Narey (prisons)
and Mr Neyroud (police) agreed with the Chairman's observation
that, the more the police met their target of closing the justice
gap (putting people in prison), the more difficult it
became for the prison service to meet its own targets on overcrowding
and re-offending.[46]
55. It becomes difficult to prioritise in cases where
targets are shared by more than one department or agency, or where
the department is reliant on others to contribute toward meeting
the targets. In her evidence to us Dr Morgan of the NHS Confederation
saw a problem "at the top, at government level" firstly
in getting agreement on what those joint targets should be, and
then in making sure that every department regarded these as a
top priority.[47]
Cheating
56. The cases mentioned above demonstrate a failure
by Government departments to understand the way things work on
the ground, and to set targets competently. Beyond that, we also
heard accusations of a more direct threat to the public service
ethos: the deliberate falsification of information and failure
to follow proper procedures, amounting at times to cheating.
57. The recent case of a primary head teacher who,
anxious to avoid a low league table placing, helped his pupils
to cheat on Sats tests may be rare.[48]
Both the NUT and the National Association of Head Teachers believed
this to be so. However David Hart of the NAHT thought that such
cases might be on the increase.[49]
58. In the NHS, some accident and emergency units
appear to be prone to creative accounting. In their evidence to
us the BMA, the RCN and the Patients Association all cited examples
where targets for A and E maximum waiting times were being circumvented by imaginative fixes: trolleys either had their wheels removed or were re-designated as 'beds on wheels', and corridors and treatment rooms were re-designated as 'pre-admission units'.[50]
59. The Consumers' Association told us of a range
of near-corrupt practices in ambulance services:
"Some ambulance trusts were massaging their
response times in order to meet Government response-time targets.
For example, in some cases ambulance trusts reported reaching
patients in the near impossible time of less than one minute (and
in one case less than zero seconds). Paramedics also told us that
calls may be re-classified once the ambulance has arrived on the
scene, so a late Category A call may be reclassified as Category
B in order to meet that particular performance target. More worryingly,
Health Which? found direct evidence of pressure being exerted
on paramedics to achieve the response time targets by altering
records".[51]
60. Occasionally, deliberate manipulation of figures
has come to light in other parts of the NHS. In 2001, the NAO
reported on the inappropriate adjustment of waiting lists by nine
NHS Trusts.[52]
The adjustments reduced the apparent numbers of patients on waiting
lists (then a key target for the Department of Health), affecting
thousands of patients' records, and resulted in delayed treatment
for some.
Failures
in reporting and monitoring
61. In its evidence to us the NAO tactfully described
the Government's reporting against targets as still "developing",
noting the absence either of centrally accepted standards for
reporting performance or of any general requirement for audit
or validation of results reported. Many of the NAO's value for
money reports have examined departments' performance measurement
systems or validated performance data. The NAO reported that in
over 80% of such 'first time' validations, they found that the organisation had materially misstated its achievements or had failed to disclose potentially material weaknesses in its data. In over 70% of validations, there were material inaccuracies
in performance data used to track progress against one or more
key targets. Taking a different frame of analysis, there were
problems with the reporting of around 20% of targets examined.[53]
62. According to the NAO the reason for these problems
was a lack of attention to, or expertise in, performance measurement
and reporting techniques. But the absence of any routine external
validation of the measures meant that there was no external discipline
on trust reporting, and no routine independent review of the quality
of information. Our research into departments' performance showed
up significant variations in how progress against targets was
reported. Typically, departments have been much more forthcoming about targets they have met than about those where there has been 'slippage' in progress. There is little central guidance
on how such reporting should be carried out. This situation jeopardises
the credibility of the whole policy of government by measurement.
63. Difficulties in monitoring and reporting have
also sometimes been the result of poorly thought-out targets.
The Statistics Commission complained to us that in some policy
areas:
"Targets have been set without consideration
of the practicalities of monitoring and what data already exist.
Sometimes this simply results in the need to collect additional
data, potentially diverting resources from other priorities, but
setting targets without baseline information runs the risk that
targets are set at levels which are unrealistic (or undemanding)
or which may be difficult to monitor effectively".[54]
64. The Commission pointed us to some difficulties
in monitoring particular government services and programmes, including
the NHS Cancer Plan and the campaign against child poverty. On
the latter, they explained that it could take a very long time
to arrive at accurate figures for broad 'outcome' measures
(often much longer than it takes to arrive at figures about inputs
or outputs).[55]
The Commission reiterated this general point in its 2003 annual
report, remarking that there were several areas in which national
statistics were inadequate for the monitoring of targets. It concluded:
"In the absence of good baseline information, the inevitable
arguments about whether such targets have actually been met are
liable to undermine public confidence in government".[56]
65. A particular issue in performance reporting
is that of shared and cross-cutting targets. There are many instances
of PSA targets that are the shared responsibility of more than
one department. Most of these shared targets are contained in
an individual department's PSA, where the same target is replicated
for each department sharing the target. Other targets of this
sort can be found in cross-cutting public service agreements,
where responsibility for the whole PSA is shared between two or
more departments, such as the PSAs on the Sure Start programme
and on the criminal justice system. Normally departments co-ordinate
their reporting on shared targets, but there has been the occasional
example where this has not occurred.[57]
It has sometimes been difficult to follow progress against cross-cutting
PSA targets, where the relevant departments all share responsibility
for the targets, but where in practice accountability for them
might slip between the interdepartmental cracks (for example in
the 1998 PSA targets on action against illegal drugs).
66. Beyond individual departmental failings, there
is the larger question of whether performance against targets
needs to be independently validated. At the moment, all such assessments
are based on departments' own judgements of how well they have
performed against their targets. We doubt whether it is enough
for assessment by government departments (however good the guidelines
from the centre) to be used as the single yardstick. The Sharman
report on audit and accountability in the public sector recommended
independent validation by the NAO.[58]
Since April 2003, the NAO has carried out external validation of the
data systems feeding into performance reporting, as recommended
by Sharman. However, this falls well short of independent external
validation of the actual judgements about whether targets have
been met or not. More needs to be done to ensure the credibility
of the figures.
67. The Office of the Deputy Prime Minister Select
Committee raised this issue in its report on the ODPM's 2002 departmental
annual report, in which it expressed scepticism about the veracity
of the department's reporting on targets. The Committee suggested
that the department had an interest in presenting its performance
in the most favourable light, which had the effect of inhibiting
open and comprehensible reporting. The Committee concluded:
"We heard that the Department monitors its own
progress against its targets. With PSA targets ODPM, like all
government departments, both sets and marks its exam paper. This
undermines the credibility of the Annual Report. The Annual Report
should make clear whether reported progress against each target
has been externally validated in any way. The National Audit Office
will audit the systems used to validate targets from 2003-06;
but validating systems is a long way from validating the targets
themselves. Reported performance can only be credible if targets
are externally monitored, by bodies reporting to Parliament and
not other government departments. We recommend that the National
Audit Office should undertake such monitoring".[59]
68. The continuing arguments about whether targets
have been met illustrate how hard it is to use performance information
without party political considerations getting in the way. It
has become almost impossible to have sensible discussion about
targets because of the way in which the whole issue has become
a political (and media) football. Conflicting claims have emerged
from the Government and from the Conservatives about the actual
number of these targets that have been met. The set of PSA targets
published as part of the 1998 Comprehensive Spending Review is
the first round of targets to complete its life span, covering
the period from 1999 to 2002. As such, it is the only set of PSA
targets for which definitive judgements can be made about whether
the targets have been met or not. In an attempt to clarify the
situation, we tracked progress against every performance target
contained in the 1998 PSAs. The results of this exercise appear
below in summary form, with fuller detail contained in the annex.
69. Our research found that 221 of the 366 performance
targets set out in the 1998 Comprehensive Spending Review were
judged as met, representing 60.4% of the total. In contrast, a
comparatively small number of targets were not met: 36, or 9.8%.
Relatively high percentages were recorded for the number of targets
where no judgement could be made of whether they had been met
or not, since there was either a lack of data on their achievement
(14.2%) or there was simply no final reporting at all on their
achievement (10.4%). However, these totals are skewed somewhat
by the inclusion of results for the smaller departments (these
smaller departments were set service delivery agreements (SDAs)
rather than PSAs in subsequent Spending Reviews, to reflect better
the contribution their targets made to the Government's overall
goals and priorities).
70. An assessment can also be made of how well the
main departments performed against their targets. By taking out
the results for the smaller departments and the cross-cutting
PSAs, we come up with a total of 249 targets for the main departments.
Of these targets, 67.1% were met while 10.0% were not met. The
results can be refined further by looking only at those targets
where a definite outcome was recorded (i.e. the targets were met,
not met or partially met). This leaves a total of 211 targets,
of which the majority (79.1%) were recorded as met, with a further
9.0% partially met and 11.8% reported as not being met. This means
that the main departments, as a group, met or partially met 88.1%
of the targets for which they reported a definite outcome.
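As a check on the arithmetic, the percentages in this paragraph can be reproduced from the underlying counts. The counts used below (167 met, 19 partially met, 25 not met) are our reconstruction from the quoted percentages, not figures taken directly from the annex:

```python
# Reconstructed counts for the 211 main-department targets with a definite
# outcome; inferred from the percentages quoted above, not from the annex.

outcomes = {"met": 167, "partially met": 19, "not met": 25}
total = sum(outcomes.values())                          # 211

for outcome, count in outcomes.items():
    print(f"{outcome}: {100 * count / total:.1f}%")     # 79.1%, 9.0%, 11.8%

combined = outcomes["met"] + outcomes["partially met"]
print(f"met or partially met: {100 * combined / total:.1f}%")
# Prints 88.2%; the 88.1% in the text comes from summing the rounded
# figures (79.1 + 9.0) rather than recomputing from the raw counts.
```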
71. The findings from our research bear out both
the Government's and the Conservatives' figures. The reason for
this is that, while there appears to be broad agreement on the
raw figures of numbers of targets met, the interpretation and
presentation of them are quite different. The Government maintains
that 87% of the 1998 targets have been met.[60]
However, those citing this statistic sometimes omit to mention
that: (a) it only counts the main departments' targets, not all
of the targets outlined in 1998; (b) it includes targets partially
met as well as those fully met; (c) it only includes targets with
deadlines within the reporting period (1999-2002); and (d) only
targets where performance information is available are included.
Hence, the 87% figure for targets met is quite heavily qualified,
something that is not always made clear.
72. Similarly, the Conservatives' claim that 38%
of the 1998 targets had not been assessed as met needs to be put
in its appropriate context. Our understanding is that included
in this figure are targets which really belong in a separate category,
such as those that have been judged 'partly met', 'almost met',
'ongoing' and those where there is 'insufficient information to
reach a conclusion'. The Conservatives are careful to phrase the
38% statistic as targets that are 'not assessed as met', rather
than 'assessed as not met'. However, this subtle distinction is
likely to be missed by most observers, as is reflected in
the news reports based on these figures that said 38% of targets
'have not been met'[61]
or were 'missed'.[62]
Hence, the suggestion that the Government has failed to meet 38%
of its targets is inflated, since this figure includes targets
which cannot properly be considered 'not met'. All this suggests
to us that there is a strong case for independent validation of
the figures.
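The two rival presentations can be derived from the same tallies. In this sketch the counts are again our approximate reconstruction from paragraphs 69 and 70, so the results land near, rather than exactly on, the figures each side quotes; the remaining gaps reflect the qualifications (a) to (d) and the categorisation choices described above:

```python
# One set of tallies, two political framings. Counts are our approximate
# reconstruction from paragraphs 69-70, not official figures.

all_targets = 366            # every target in the 1998 CSR
all_met = 221                # judged met (60.4% of 366)

main_definite = 211          # main departments' targets with a definite outcome
main_met_or_partial = 186    # met (167) plus partially met (19)

# Government-style framing: met or partially met, main departments only,
# counting only targets with a definite reported outcome.
print(f"{100 * main_met_or_partial / main_definite:.0f}% met")   # ~88% (claimed: 87%)

# Opposition-style framing: everything 'not assessed as met' (including
# partially met, ongoing and unreported targets) over all 366 targets.
print(f"{100 * (all_targets - all_met) / all_targets:.0f}% not assessed as met")
# ~40% (claimed: 38%)
```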
73. Independent verification by a credible external
source would go some way towards dispelling the current confusion
about the precise number of targets that have been achieved. Beyond
this, however, the onus is on those presenting information about
target achievement to make clear what their figures actually refer
to, with appropriate qualifications upfront. As we suggest later,
much of the confusion could be avoided if a definitive official
account of the number of targets met across Government were to
be produced, properly audited and validated by an independent
body.
Confused accountability and the problems with league tables
74. One major cause of confusion over accountability
is the fact that the centre does not have a strong enough sense
of the importance of geography. Although the Westminster system
tries to centralise the responsibility for the performance of
all public services, the delivery of those services is dispersed,
and often devolved. Most major public services have never been
delivered by Whitehall departments. Departments do not
have their hands on the management of programmes: they supervise
policies for which ministers answer to Parliament. A decade and
more of structural reforms in public administration has increased
the complexity of what is, in effect, multi-layered government.
At the top is a layer of Whitehall departments; in the middle
are a set of institutions, such as local authorities or health
bodies, supervising the delivery of public services. At the bottom
are individuals who meet the public when they go to a school,
a doctor's surgery or a public library. This complex geography
has a profound effect on accountability and motivation.
75. There are therefore fundamental problems with
the accountability of any target that is set centrally without
proper reference to those on the front line. As long as targets
are being met, the centre and local providers can happily claim
ownership and credit. However, if a target is missed, this may
well lead to acrimonious dispute about where blame rests. If impossible
targets are set, then disowning responsibility for pre-ordained
failure will be the first priority of the front line body which
has been assigned such a hopeless task. It is a recipe for the
growth of a blame culture.
76. When government chooses extremely ambitious targets,
there is the danger (whatever the intention) that any achievement
short of 100% success is classified as failure. Simplistic approaches
of this kind, with political and media charges about failure to meet targets in full, can be profoundly demoralising to school
heads and classroom teachers, police officers and hospital staff
who have worked hard to achieve progress in the face of local
difficulties. Crude league tables and star ratings can be particularly
misleading and demotivating. They tend to make everybody except
the 'league champions' look and feel like a failure. They offer
only a simple snapshot when the reality is much more complex.
77. This leads to tensions, demoralisation and perceived
injustices. John Bangs of the NUT described how he had witnessed
the way that educationalists in Tower Hamlets, where he had taught
for many years, were demoralised by their position in the league
tables:
"For English at Key Stage 2, the national percentage
for getting young children at level one (that is when they are seven) to level four, at the end of Key Stage 2, when they are 11 (level one is below the average at Key Stage 1) is 32%. In Tower Hamlets, with a Bangladeshi population
of round about 65-70%, and also a big turnover, demographically
shifting all the time, they managed to take level ones to level
fours to 53%. It is over 20% higher than the national average.
This is an enormous success, yet because Tower Hamlets failed
to meet its nationally set target, it is considered to be a stuck
authority … there are better measures than that for evaluating
what is an enormous success for young people and for teachers".[63]
78. The classic example of distorted accountability
at ministerial level is the original numeracy and literacy targets
in the Comprehensive Spending Review. Failure to meet these targets
contributed to the resignation of Rt Hon Estelle Morris as Education
Secretary. Yet significant progress had been made, even if this
fell somewhat short of the targets. The outcome of this 'failed'
target actually represented a substantial improvement over the
previous situation. Ms Morris told us:
"I would not have felt the need to resign because
the literacy and numeracy targets had not been met, it was the
best thing I did while I was in office, it is the thing I am hugely
proud of and the government has every right to be proud of. The
difference was I said I would resign if the targets were not met
and at that point it became different".[64]
79. This kind of example seems to us to be accountability
gone mad, a case of process taking over from reality. At lower
levels, we heard evidence of the 'P45 targets', success against
which is seen as crucial to the survival of hospital chief executives.
80. We heard from several witnesses that school league
tables had not reflected the rate of improvement of particular
schools. The NUT also argued that even value-added tables were
not the complete answer, failing to take into account other social
factors such as migration patterns in the school-aged population.
Nevertheless, there were signs that ministers had seriously considered
using such flawed tables to decide on the fate of headteachers.[65]
Whatever the truth of the matter, this is not a message that inspires
confidence that the lesson that crude targeting is counterproductive
has yet been learned in all parts of government.
81. While none of our witnesses suggested that
performance information should not be made publicly available,
its relevance and interpretation were real concerns. Professor
Brighouse saw a "dilemma of competing good".[66]
Improvement requires knowledge and awareness of where best practice can be found, but simplistic interpretation, by the media among others, distorts this objective, emphasising a crude form of accountability rather than helping to improve services.
82. Crude league tables do not necessarily help to
identify and disseminate good practice, and are instead "often
used in a primitive way"[67]
and "on balance are very often more harmful than they are
productive".[68]
The evidence we received from professionals supported this view,
with the star ratings for hospitals suffering particular criticism
for their failure to reflect clinical outcomes. The RCN suggested
that far too much was riding on these ratings, including the opportunity
of applying for foundation hospital status:
"Hospital star ratings are a powerful tool as
they are used to determine access to the performance fund, which
amounted to £250 million in 2001-02 and £500 million
in 2003-04, and the extent of 'earned autonomy' … As a consequence,
the need to achieve high star ratings has enormous potential to
distort organisational systems and directly influence staff behaviour
in ways which might not be conducive to patient care".[69]
83. Evidence from the private sector suggests that
league tables, whether used internally or externally, need to
be interpreted with care. Lord Browne said BP's experience with
league tables was "mixed":
"At one stage we did decide to rank order performance
of very small units within one of our divisions, the retail division
I think. This was interesting to start with. It said to people
'I can see where we need to go'. Continuous attention on the league
table, however, made the league table itself the purpose, not
the learning. It is very important, I think, that league tables,
or whatever measurement, should be used to improve and to learn
rather than be the end in itself".[70]
84. Lord Browne told us that in the private sector
it could sometimes be good to fail in relation to a target, if
the failure contributed to organisational learning. The contrast
with the treatment of targets in the political world could not
be more stark.
85. On the other hand, we heard evidence of the more
sophisticated approach embodied in the Comprehensive Performance
Assessments (CPA), which the Audit Commission has introduced into
local government. Using a degree of self-assessment and striving
to put the raw data in the broader context of performance, they
seek to evaluate the capacity and skills of local authorities.
86. There is also a need for greater clarity about
what (and whom) the publication of performance data is for, and
therefore the form that it should take. Is it to enable citizens
to choose? Or to spur providers to do better? Or to offer reassurance
about the spending of public money? Or to provide the basis for
either the grant of greater freedoms or the imposition of greater
controls? There can, of course, be more than one purpose, but
in each case it is important to be clear what these are and, therefore,
what is the most appropriate form of publication of performance
information.
The measurement culture adapts
87. The case of the CPA is one example of the way
that the measurement culture has, over the years, proved more
adaptable than its harsher critics recognise. Governments have,
since the beginning of the 1990s, recognised that target setting
and performance management call for skill, care and continuous
learning from experience.[71]
This has led to a flow of guidance from central departments and
others over the last decade, since targets began to be set for
Executive Agencies, and also to statements of explicit policy
changes over time in the PSA White Papers and elsewhere.
88. Since PSAs were introduced in 1998 many changes
have been made and our evidence suggests that the Government is
preparing to make more. As we saw above, the number of targets
has been sharply reduced, from 366 in 1998 to 123 in 2002.[72]
These crude figures may exaggerate the reduction, but the number
of targets has roughly halved to an average of about 6 or 7 per
department.
89. Even at its most vigorous and assertive, in the
first three years of the present administration, the measurement
culture was moderated by common sense and the principles of the
performance culture. Rt Hon Estelle Morris MP crystallised the
point when she reminded us that "… literacy and numeracy was essentially a professional development strategy. You have
talked about targets but where the money was spent and where the
time was spent was in retraining every single primary school teacher
in best practice English and maths".[73]
90. Thus targets and league tables have to be seen
in context as part of a wider range of measures for improving public
services. More targets are now outcome (or output) related. Floor
targets were introduced in 2000. Some key targets have been changed
(for example the switch from waiting list numbers to waiting
times in health) or abandoned as unhelpful or unrealistic (examples
are drugs and traffic congestion). It is recognised that if targets
are stretching, some of them are likely to be missed, but that
good progress can still be made. Targets have become less rigid,
and more 'aspirational' (although there appears to be little understanding
of the impact of that change in approach).
91. There is also a greater emphasis from the centre
on consultation. The 2002 Pre-Budget Report said "all departments
should consult delivery bodies at the target formulation stage".[74]
It is acknowledged within government that more needs to be done.
There has been progress in the relations between central and local
government through the introduction of Local PSAs with targets
which reflect a mixture of central and local priorities, though
the numerical targets are set by central government, and are backed
up by grants to help achieve the targets and by extra freedoms
or flexibilities. The Government has also tried to improve the
quality of performance monitoring and management. Official guidance
on performance information in government appeared in a 2001 publication
called Choosing the Right FABRIC: A Framework for Performance
Information, jointly published by the Treasury and Cabinet
Office among others.[75]
92. This developing acceptance of shortcomings
by the centre is now being matched by an acceptance among professionals
that government by measurement is here to stay. The RCN acknowledged
that targets had some value: "It is unlikely that the Government
will abandon performance management and there is a case that targets
have been central to delivering some significant improvements
in the NHS. Consequently, the RCN believes that performance management
systems should be improved rather than abandoned".[76]
93. Ministers have also made some specific changes
in policy and tone. In the recent DfES White Paper on Excellence
and Enjoyment, a strategy for primary schools, there are signs
of a very different approach to targets. The paper proposes several
striking changes in the regime for primary school targets. Among
other things, it accepts headteachers' arguments that at present
schools sometimes end up with targets which fit LEA or national
targets but which schools do not own, and crucially says that
in future schools will set their own targets with LEA targets
being set afterwards.[77]
94. The problem is that, for all the attempts to
correct the excesses of the measurement culture, the overwhelming
impression from our witnesses was still negative. While the Education
Secretary promotes the idea that national targets for literacy
and numeracy should be treated as less of a mantra, he is accused
of wanting to use the new 'value-added' tables to single out headteachers
for the sack.[78]
95. In the next Chapter, we explore some proposals
for achieving a more sensible and intelligent balance.
21 Cabinet Office (2002), Better government services: Executive agencies in the 21st century, page 32
22 PST 54
23 Q 72
24 PST 30
25 Q 516
26 Q 515
27 Q 965
28 Q 967
29 Q 66
30 Q 347
31 PST 31A
32 Q 538
33 Q 476
34 In particular during discussions with front-line staff during the Committee's visit to Bristol.
35 Q 410
36 PST 60
37 Q 194
38 PST 15
39 Q 199
40 Q 17
41 Q 970
42 PST 07
43 Ibid.
44 PST 42
45 Q 2
46 Q 742-43
47 Q 431
48 Guardian and Independent, 7 May 2003
49 Q 13
50 Q 2, Q 179 and PST 40
51 Colin Meek, "Raising alarm", Health Which?, August 2002
52 National Audit Office (2001), Inpatient and outpatient waiting in the NHS, HC 221 (2001-02); and National Audit Office (2001), Inappropriate adjustments to NHS waiting lists, HC 452 (2001-02)
53 PST 54
54 PST 21
55 Ibid.
56 Statistics Commission, Annual Report 2002-03
57 Q 1063 and PST 60
58 'Holding to Account': The Review of Audit and Accountability for Central Government, February 2001
59 Departmental Annual Report and Estimates 2002, Fifth Report 2002-03, HC 78-I, para 19
60 HM Treasury Departmental Report 2003
61 Financial Times, 7 July 2003
62 Sunday Times, 6 July 2003
63 Q 234
64 Q 949
65 'Unsettling Scores', Times Educational Supplement, 27 June 2003
66 Q 375
67 Q 177
68 Q 600
69 PST 40
70 Q 328
71 See HC 482-I, Session 2002-03, and HC 563-I, Session 2002-03
72 Comprehensive Spending Reviews 1998 and 2002
73 Q 956
74 'Steering a Steady Course: Delivering Stability, Enterprise and Fairness in an Uncertain World', HM Treasury, November 2002
75 See HMT press release 37/01
76 PST 40
77 'Excellence and Enjoyment: A Strategy for Primary Schools', DfES, May 2003
78 TES, op. cit.