Written evidence submitted by the National
Audit Office
MEASURING GOVERNMENT PERFORMANCE: LESSONS FROM THE PSA ERA
INTRODUCTION
1. In July 2010 the National Audit Office reported
on the main achievements of the previous Government's Public Service
Agreement (PSA) framework and areas for consideration in the light
of the fiscal challenges the public sector faces in the coming
years.[6]
This briefing sets out the findings from that report and has been
prepared with a view to assisting the Treasury Committee in its
consideration of this subject.
BACKGROUND
2. Measuring Government performance is vitally
important. A good framework of measures shows taxpayers what they
are getting for their money. It also enables the Government itself
to assess whether it is achieving its key objectives and to learn
how to achieve them more effectively and at lower cost.
3. In 1998 the then Government introduced a
PSA framework as the primary means to set its top priority objectives
and measure performance against them. Departments agreed PSAs
with the Treasury as part of the Spending Review process held
every two to three years. In return for funding, departments agreed
to deliver key outcomes such as reducing child poverty, tackling
climate change, and improving healthcare. Since 2007, all these
key outcomes have cut across several departments. In 2007, the
Government added Departmental Strategic Objectives (DSOs) to the
framework, designed to formulate and measure objectives specific
to each Department. Each PSA and DSO was measured using a small
basket of indicators, typically between four and six. In 2007
there were 30 PSAs supported by 152 indicators.
4. We reviewed the PSA and DSO Framework and
identified four factors that are crucial for effective accountability
and performance management:
Clear objectives that capture the outcomes
that matter most to the Government.
Measurement of how far outcomes are attributable to the Government's performance rather than to factors outside its influence.
Information which highlights the cost of performance and how to improve cost-effectiveness.
Reliable, easy-to-interpret reporting.
CLEAR OBJECTIVES AND SUITABLE INDICATORS
Clarity of objectives
5. PSA aims were broad statements of purpose
which, taken in isolation, could often be vague and open to interpretation.
For example, PSA 17 from the 2007 Comprehensive Spending Review
(CSR 2007), led by the Department for Work and Pensions, was "to
tackle poverty and promote greater independence and well-being
in later life". However, these aims were underpinned by a
series of subsidiary measures and explanations which generally
added clarification and specificity. CSR 2007 supported the overall
aim for each PSA with a Delivery Agreement containing detailed
explanations of the Vision behind the PSA, how the PSA would be
delivered, the indicators that would be used to assess progress,
and the data sources and analysis which would support the indicator.
Where these mechanisms were well implemented, the result was a clear and detailed understanding of the PSA.
6. Problems with lack of clarity persisted in
some cases. We identified three types of problem:
The system described above was not always
applied with sufficient rigour. The headline description of PSA
14, for example (to "Increase the number of children and young people on the path to success"), depended on what one meant by "the path to success". The Delivery
Agreement sought to spell out what this meant by describing five
sub-outcomes, but there was no straightforward read-across from
these outcomes to the specified indicators used to measure the
PSA. This made it more difficult to judge progress against the
outcomes sought.
It was inherently difficult to capture
performance against some of Government's core objectives in clearly
measurable terms in a PSA. For example, PSA 30, to "Reduce
the impact of conflict through enhanced UK and international efforts"
(led by the Foreign and Commonwealth Office), expressed an important
aim of Government, but one where it was very difficult to measure
the UK's contribution to reducing the impact of conflict.
The measurement mechanisms applied to
PSAs were not fully applied to DSOs. We found the DSOs to be substantially
less well-specified than PSAs, although in principle they represented
a useful mechanism to capture the wider business of a department.
ADEQUACY OF THE INDICATOR SETS
7. PSAs were measured through a small set of
indicators that were meant to capture the main thrust of the PSA.
So, for example, PSA 9 (to "Halve the number of children
in poverty by 2010-11, on the way to eradicating child poverty
by 2020") is measured by three indicators: Children in absolute
low-income households; Children in relative low-income households;
and Children in relative low-income households and material deprivation.
8. The Treasury did not intend that the indicators
should provide exhaustive coverage of the PSA objectives. However,
it did envisage that coverage should be reasonably comprehensive: clearly necessary if progress towards the PSA objective was to be fairly
assessed. We reviewed the set of indicators for each PSA to judge
whether they offered a reasonable overview of progress and concluded
that they did for most of the PSAs (22 out of 29, or 76%). However,
for seven of the PSAs (24%), we concluded that the indicator set
was not broad enough to provide a reasonable overview of progress
against the whole PSA.
CLEAR SUCCESS CRITERIA
9. Until CSR 2007, PSA success was defined by
reference to targets. At their simplest, these targets specified
a discrete, quantified measure of success: for example, "by
31 March 2002... reduce the backlog of council house repairs by
at least 250,000". But often the targets were less precise,
especially when trying to capture complicated social outcomes.
There has been a gradual move away from specific targets towards
the use of other success criteria and, in CSR 2007, the Treasury
explicitly directed a move away from setting a specific target
unless a department was confident it offered the best approach
to driving delivery. The Treasury guidance did advise Departments
to define how "success" would be measured.
10. The move away from targets led to greater
emphasis on different ways of measuring success against PSA objectives.
This move prompted useful further thought about the levers and
incentives that could be used to promote better performance. It
also avoided pressures to set poorly informed outcome targets,
where research was insufficient to identify a stretching but achievable
performance level, or where Government influence over outcomes
was too small to make a target meaningful. However, it also made the task of defining what constituted success more difficult.
Our work suggests departments did not always meet this challenge.
For 36% of all CSR 2007 PSA indicators, the basis for claiming
success was contestable or unclear. That in turn caused problems
in interpreting performance against each PSA as a whole, where
success against individual indicators within the PSA could have
different levels of clarity.
MATCHING DATA SYSTEMS TO INDICATORS
11. The indicators measuring the PSAs were themselves
supported by data systems. For the PSA Framework to operate successfully,
the data systems needed to be well-matched to the indicators they
were meant to support. However, in our CSR 2007 work, we found
that there were mismatches for a significant number of indicators:
in 41% of cases the data system was not wholly appropriate to
the indicator for monitoring purposes, and in 9% of cases the
data system did not measure all elements of the indicator.
GOVERNMENT CONTRIBUTION TO OUTCOMES
12. PSA indicators did not usually track the
result of Government activity alone: for example, health outcomes
may be the result of improved health services or reflect the influence
of government health campaigns, but may also be the effect of lifestyle choices over which the Government has limited influence. However,
the PSA indicators generally did not make this distinction clear,
rarely measuring just the effect of what the Government did. There
was no requirement to identify and report the Government-delivered or Government-funded outputs that contributed to outcomes, since these outputs were not part of the PSA indicators. As a result, movements in PSA
indicators have not been a sufficient basis for assessing Government
performance. This weakness hindered accountability as well as
performance management, and meant there was no clear business
model or set of assumptions to refine in light of experience.
ASSESSING COST EFFECTIVENESS
13. PSAs were designed to promote accountability
for spend. PSAs, and later DSOs, were announced as part of each
spending review, but there was no mechanical link between the budgets
set and performance sought. In fact, in CSR 2007 some departmental
budgets were settled before the PSA set was announced, although
development of PSAs was occurring in parallel with negotiations.
Work on DSOs continued well after the PSA set was announced.
14. Increasing fiscal pressures have highlighted
the need to be able to cost delivery of PSAs and DSOs. Government
has not, however, developed an accountancy approach to link expenditure
with outcomes. In general, there is a lag between outputs being
delivered and outcomes achieved: several years in areas such as education, for example. The Treasury designated DSOs as the
basis for segmentation of total expenditure in schedule five of
a department's published accounts, but there was not necessarily
any direct relationship between a given year's expenditure under
a DSO, and the outcomes being reported in the same year for that
DSO.
15. Financial information has been poorly linked
with individual indicators. Annual departmental expenditure, although
reported by DSO, has not been broken down to the level of the
indicators used to report progress, and so is not readily usable
for deeper analysis of the cost of progress. Separate value for
money targets have been set in successive Spending Reviews, but
these targets have centred on cost cuts and transfers, and have
not been linked to PSA or DSO programme efficiency. This situation
hinders informed strategic decision-making because it is not clear
what allocation of available resources could achieve the best
overall results.
Transparent and reliable reporting
16. Producing and reporting reliable data are
essential elements in accountability and performance management.
A performance framework, however well designed, can only be as good as the base data it uses and how well those data are reported.
RELIABILITY OF PSA AND DSO DATA SYSTEMS
17. The results of our validation of PSA data
systems since Spending Review 2002 are shown in the following
figure. Less than a third of SR 2002 systems were fit for purpose,
but this rose to more than half in CSR 2007: clearly a substantial
improvement. However, 10% of systems remained not fit for purpose
and 33% had weaknesses that prevented them being classed as fully
fit for purpose. Common weaknesses included controls that needed strengthening to mitigate identified risks to data quality, and inadequate disclosure of measurement policies and limitations.

18. The results of our validation of DSO data
systems from 2007 showed that the situation was appreciably less
favourable than for PSAs. Less than half of DSO systems were fully
fit for purpose, and 17% were actually rated not fit for purpose.
These findings accord with the results from our Value for Money
work. Since 2001, some 20% of Value for Money recommendations
have related to the inadequacy of performance information.
TRANSPARENT REPORTING
19. The Government published a substantial amount of progress
information relating to its key objectives and, under CSR 2007,
clearer statements of measurement policies and practices. For
20% of PSA indicators, however, we found that departments did
not adequately report limitations to measurement or needed to
provide more contextual information to assist the reader to understand
performance.
20. Since CSR 2007, the Treasury has required
departments to publish actual data alongside performance narratives
and assessments, but, unlike financial reporting, there are no accepted professional standards for reporting departmental performance.
SCOPE FOR IMPROVING DATA QUALITY
21. Government has taken a number of actions
to improve data quality, but our validation work has shown scope
to get more value from these actions through wider or more rigorous
implementation. The paragraphs below describe three areas where
further value could be obtained.
22. Transferring lessons to new data systems: Departments
improved data systems used from one Spending Review to the next,
but data systems which they put into operation for the first time
did not reflect that learning. Greater attention to embedding
experience and good practices would help get more value from investment
in information systems.
23. More rigorous implementation of initiatives
to improve data quality: Central initiatives to improve
data quality have secured some improvements, but there is scope
to secure greater impact from them. For example:
Guidance on good practice: Under
CSR 2007, the Treasury issued comprehensive guidance on the development
of indicators. Departments did not consistently apply the guidance
and the Treasury did not enforce its application.
Data Quality Officers: The Treasury
introduced a requirement for designated senior leads responsible
for the quality of data, separate from colleagues responsible
for performance success. In practice, the degree to which relevant
posts have been filled has varied, as has the level of authority
and support given to the post.
Challenge panels: The Treasury invited
selected stakeholders to challenge the proposed sets of PSA indicators
and draft Delivery Agreements prior to sign-off. In practice, panels focused mainly on the substance of the indicators and Agreements,
not on the more technical aspects of measurement.
OUR OVERALL CONCLUSION AND RECOMMENDATIONS
24. The PSA framework provided a clear focus
on the objectives that mattered for Government, and was gradually
improved over the years. Published Delivery Agreements and associated
Measurement Annexes made it easier to understand the contributions
expected from the various delivery partners and how they intended
to assess progress. The clarity and presentation of PSA monitoring
information also improved, making it easier to understand the significance of performance issues arising. Weaknesses in the operation and design of the framework, however, mean that accountability has not been as strong as it should have been, particularly in
the framework's ability to inform judgements of cost-effectiveness.
25. Performance measurement arrangements under
the new Government will need to be tailored to its objectives
and the delivery models it chooses to operate. Lessons from the
strengths and weaknesses of the PSA system that it should consider
in any new measurement systems include the importance of:
clearly and unambiguously expressed objectives,
indicators and success criteria;
an explicit published "business
model" linking inputs (the resources used) through outputs
(goods and services delivered) to outcomes (the impact on society),
used as a basis for measurement and reporting. Such a "clear
line of sight" between inputs and outcomes should help interpret
performance, and to promote lesson learning and the refinement
of the model over time;
firm integration of performance measurement
into public bodies' management systems, such as budgeting, resource
planning and allocation, programme evaluation and performance
review processes, so that lower-level management systems
feed into and support top-level objectives; and
departmental information strategies that
define the range of contextual and performance information needed
to assess progress and value for money. Each strategy should state
data quality standards, and set up arrangements to provide assurance
that those standards are met. This will enable Government to produce
clearer and more robust performance information.
November 2010
6 Taking the Measure of Government Performance, Report
by the Comptroller and Auditor General, HC 284 of Session 2010-11.