2 EU Development Assistance: EuropeAid's evaluation and results-oriented monitoring systems
Committee's assessment | Politically important |
Committee's decision | Not cleared from scrutiny; for debate in European Committee B; drawn to the attention of the International Development Committee |
Document details | European Court of Auditors' (ECA) Special Report: EuropeAid's evaluation and results-oriented monitoring systems |
Legal base | Article 287(4) TFEU; |
Department | International Development |
Document number | (36569), |
Summary and Committee's conclusions
2.1 Within the European Commission, the Directorate-General
for Development and Cooperation (EuropeAid) is responsible for:
formulating EU development policy and defining sectoral policies in the field of external aid;
drawing up the multiannual programming of the external aid instruments together with the European External Action Service (EEAS); and
fostering coordination between the EU and the Member States on development cooperation and externally representing the EU in this field.[14]
2.2 This European Court of Auditors' (ECA) Special
Report looks in detail at EuropeAid's evaluation and Results-Oriented
Monitoring (ROM) systems. ROM is not a full evaluation, but rather
a short, on-the-spot, monitoring exercise.
2.3 Evaluation, on the other hand, is the systematic
and objective assessment of the design, implementation and results
of an ongoing or completed programme or policy. The main purpose
is to assess whether the objectives of a programme have been met
and why, and to formulate recommendations with a view to improving
interventions in the future, and enhancing decision making.
2.4 The Court found that EuropeAid's evaluation and
ROM systems are not sufficiently reliable. Though well-organised,
they lack overall supervision of programme evaluation activities.
Insufficient attention is paid to the efficient use of evaluation
and ROM resources. The evaluation and ROM systems: do not sufficiently
ensure that relevant and robust findings are produced; do not
ensure that maximum use is made of findings; and do not provide
adequate information on results achieved. These factors, the
auditors say, limit considerably EuropeAid's capacity to account
for the actual results achieved.
2.5 The Commission accepts nearly all of the CoA's
recommendations (on the efficient use of evaluation and ROM resources,
the prioritisation and monitoring of evaluations, the implementation
of quality control procedures, the demonstration of results achieved
and the follow-up
and dissemination of evaluation and ROM findings); but rejects
recommendations to modify the monitoring system so that data remains
available three years after completion of a project and to extend
the follow up period for strategic evaluations.
2.6 The Special Report was published on 11 December
2014 under cover of the following press release:
"Two of the key elements of the accountability
framework operated by the European Commission's Directorate-General
for Development and Cooperation (EuropeAid) are its evaluation
and results-oriented monitoring (ROM) systems. In its special
report published today, the European Court of Auditors (ECA) is
critical of the reliability of these systems.
"Karel Pinxten, the ECA Member responsible
for this report, said: 'The demand for accountability for EU expenditure
in all fields has never been higher. It is not good enough to
report achievements in vague global terms. The Commission needs
to have the building blocks necessary for a comprehensive reporting
system which provides meaningful information for its own management
and for its external stakeholders. One of these components is
a strong evaluation system which feeds into an overall reporting
process. At the present time, EuropeAid's system is inadequate'.
"'The evaluations of projects and programmes
which are organised by Commission delegations and carried out
in partner countries are unsatisfactorily managed: overall supervision
is inadequate, the amount of resources used is unclear and access
to the results of these evaluations is lacking', according to
Mr Pinxten.
"Most programme evaluations are carried
out before the impacts and sustainability of measures can be ascertained.
There is, generally, no requirement for ex-post evaluations
and, as a result, these are rarely carried out. Indeed, whereas
ROM contractors previously carried out ex-post exercises
in a certain percentage of cases, this practice has recently been
discontinued. There is therefore a serious lack of third-party
assessment of impacts and sustainability.
"The auditors found thematic and country
evaluations (strategic evaluations) to be better managed and more
results-focused than programme evaluations. However, the absence
of well-defined objectives and indicators frequently hampers the
work of the evaluators and limits the usefulness of their work.
In addition, the planned strategic evaluation programme for the
2007-2013 period was not executed in full.
"The systems in place do not ensure that
maximum use is made of the findings of the evaluations. Weaknesses
were found in the follow-up not only of programme evaluations
but also of strategic evaluations and ROM findings.
"The detailed recommendations in the report
are intended to pave the way for the necessary improvements. Given
the considerable sums involved, with annual development expenditure
in the region of 8 billion euro, it is imperative that robust
evaluation systems are implemented without delay."[15]
2.7 The Parliamentary Under-Secretary of State at
the Department for International Development (Baroness Northover)
says:
well-functioning evaluation and monitoring processes are vital to achieving and demonstrating value for money for EU tax-payers;
the UK's Independent Commission for Aid
Impact (ICAI), the Dutch Government's development agency (IOB)
and the Overseas Development Institute (ODI) have all previously
produced reports with similar criticisms of EuropeAid's evaluation
and monitoring functions;
she and her officials have pressed for
the EU's evaluation and monitoring functions to improve for some
time, including working hard with the Commission and other Member
States to promote results-based programming through the introduction
of a new results monitoring framework;
progress so far has been slower than
she would have wanted, and she and her officials will push the
Commission to accelerate its work in this area;
the Commission's response needs to result
in a real transformation in the way evaluations are conducted
and used;
she would like the Commission to look
further at the case for implementing the ECA recommendation on
enabling monitoring data to remain available three years after
completion of a project, and will raise this at official level;
and
she will continue to press for improvements
in both evaluation and monitoring, and will closely monitor the
Commission's progress in implementing the CoA's recommendations.
2.8 EuropeAid, i.e. the Directorate-General
for Development and Cooperation, implements a wide range
of the Commission's external assistance instruments financed by
the European Development Funds (EDF) and the general budget; almost
all the EDF interventions are managed by EuropeAid. Yet there
would seem to be a long way to go in the crucial area of metrics.
2.9 We made this same observation only two months
ago, when considering an earlier ECA Special Report No. 16/2014,
which examined the effectiveness of blending regional investment
facility grants with financial institution loans to support EU
external policies. This, too, involved the Commission's ROM process
and methodology and its results framework, and noted that, even
now, the Commission had yet to establish the sort of results
framework that its counterparts, both international and bilateral
(e.g. DfID), had long established, to provide an accountability
tool to communicate results to stakeholders and a management tool
to provide performance data to inform management decisions, thus
ensuring that resources are allocated efficiently.
2.10 We also recalled that, in April 2014, the
then Minister (Lynne Featherstone) had:
declared that better, timelier results data was "vital if we are to
secure good value for money in our development programmes and
demonstrate this to UK taxpayers", and was "something
the UK has been consistently calling for since DfID's Multilateral
Aid Review
was first published in 2011";
pointed out that the proposal was
not something new and that, on the contrary, it would do no more
than bring the EU into line with other multilateral and bilateral
development actors, including her own Department; and
also pointed out that the costs of
implementing a results framework would be "more than offset
in the long run by increased value for money from Commission aid
programmes".[16]
2.11 Yet, under the rubric "Using our experience
to improve the quality of our development engagement", EuropeAid
nonetheless asserts that it "has a long and rich experience
in evaluation" and "recognises that the evaluation of
its interventions is crucial if it is to learn from experience
in order to enhance its effectiveness in development cooperation".[17]
2.12 We suggested that her successor might therefore
need to do a touch more than simply monitor the Commission's progress
in adapting its ROM process and methodology to blending, and in
devising and implementing a proper results framework, if the clearly
defined success criteria were ever to be established that she
rightly regarded as vital to understanding the effectiveness with
which the Commission uses the EU taxpayers' resources in its development
activities around the globe.[18]
2.13 Given these ECA findings regarding the ROM
and evaluation systems themselves, we feel this all the more so.
There is a regrettable air of hand-wringing; of there being little
that can be done other than to keep on knocking on the door.
Conversely, there is little sign of a real drive to back the ECA's
basic conclusion that, to pave the way for the necessary
improvements relating to annual development expenditure in the region
of €8 billion, "it is imperative that robust evaluation
systems are implemented without delay". The lack of impetus
is best summed up by the fact that, as the Minister puts it: "No
date has been set for this to go to Council". It is plain
that only if the Council presses the Commission will the necessary
improvements be made in the right timeframe.
2.14 We accordingly recommend that this Special
Report be debated in European Committee, so that the House can
question the Minister further about why a more determined effort
is not being made, and why the Council is not putting its weight
directly to the wheel through the adoption of Council Conclusions.
2.15 We also draw this chapter of our Report to
the attention of the International Development Committee.
Full details of the documents: European Court of Auditors' (ECA) Special Report No. 18/2014: EuropeAid's evaluation and results-oriented monitoring systems: (36569), .
Background
2.16 The European Court of Auditors (ECA) carries
out audits, through which it assesses the collection and spending
of EU funds. It examines whether financial operations have been
properly recorded and disclosed, legally and regularly executed.
It also, via its Special Reports, carries out audits designed
to assess how well EU funds have been managed so as to ensure
economy, efficiency and effectiveness.[19]
2.17 In this Special Report, the ECA looks at what
it describes as two of the key elements of the accountability
framework operated by the European Commission's Directorate-General
for Development and Cooperation (EuropeAid): its evaluation
and results-oriented monitoring (ROM) systems.
2.18 The auditors note that, within the Commission's
decentralised organisational framework, EuropeAid has set up its
own results accountability framework which comprises the monitoring,
evaluation and reporting of its activities:
Evaluation is "the systematic and objective assessment of an on-going
or completed programme or policy, its design, implementation and
results";
ROM is "a standardised external
review, specific to external aid, designed to look at programmes'
performance".
2.19 The ECA defines the main purposes of these parts
of the accountability framework as "to improve the implementation
of ongoing programmes and the design of future programmes and
policies through feedback and lessons learned, and to provide
a basis for accountability".
2.20 The Court found that EuropeAid's evaluation
and ROM systems are not sufficiently reliable.
2.21 Overall, EuropeAid's evaluation and ROM functions
are judged to be well-organised, but to lack overall supervision
of programme evaluation activities. Also, insufficient attention
is paid to the efficient use of evaluation and ROM resources.
2.22 The evaluation and ROM systems:
do not sufficiently ensure that relevant and robust findings are
produced (programme evaluation plans are based on insufficiently
clear prioritisation criteria; there is no monitoring system to
identify and address frequent deviations from evaluation plans;
quality control procedures are not implemented consistently for
ROM and programme evaluations);
do not ensure that maximum use is made
of findings (because proper mechanisms are not in place to monitor
their follow-up
and dissemination); and
do not provide adequate information on
results achieved (due to insufficiently well-defined
objectives and indicators, the limited proportion of ex post
evaluations and ROMs, and inherent limitations in the evaluation
methodology for budget support).
2.23 These factors, the auditors say, limit considerably
EuropeAid's capacity to account for the actual results achieved.
2.24 The Court provides recommendations on the efficient
use of evaluation and ROM resources, the prioritisation and monitoring
of evaluations, the implementation of quality control procedures,
the demonstration of results achieved and the follow-up
and dissemination of evaluation and ROM findings.
The Government's view
2.25 In her Explanatory Memorandum of 8 January
2015, the Parliamentary Under-Secretary of State at the Department
for International Development (Baroness Northover) says that,
in its response to the report, the Commission has said that "it
considers that the systems for strategic evaluations overall are
reliable even if they could be improved", but has accepted
nearly all of the CoA's recommendations; the exceptions being:
"a
recommendation to modify the monitoring system so that data remains
available three years after completion of a project (the Commission
rejected this recommendation on the basis that the benefit is
not yet shown); and
"a recommendation to extend the
follow up period for strategic evaluations (the Commission partially
accepted this, subject to its own further analysis)".
2.26 She then continues as follows:
"Well-functioning evaluation and monitoring
processes are vital to achieving and demonstrating value for money
for EU tax-payers. The CoA report is critical of EuropeAid's evaluation
and ROM functions. It finds that whilst they are generally well
organised, individual evaluations and monitoring exercises are
of variable quality, and there is no systematic method of ensuring
that evaluations actually lead to improvement in programme quality.
The UK's Independent Commission for Aid Impact (ICAI), the Dutch
Government's development agency (IOB) and the Overseas Development
Institute (ODI) have all previously produced reports with similar
criticisms of EuropeAid's evaluation and monitoring functions.
"The Commission have acknowledged the need
to improve, and have accepted the vast majority of the CoA's recommendations.
Once implemented, we would expect these recommendations to lead
to an improvement in the Commission's monitoring and evaluation
effort, which should generate better information on which to base
decisions about projects and programs, and ultimately deliver
better value for money."
2.27 The Minister continues her comments thus:
"The UK has pressed for the EU's evaluation
and monitoring functions to improve for some time, including working
hard with the Commission and other Member States to promote results-based
programming through the introduction of a new results monitoring
framework. Evaluation and monitoring are essential to ensuring
that the Commission gets value for money for tax-payers, and learning
so as to improve policy and practice.
"The UK welcomes that this report has shed further
light on this important topic. The UK also welcomes the Commission's
clear acceptance of the need to improve, and its commitment to
implement the majority of the CoA's recommendations. Progress
so far has been slower than we have wanted, and we will push the
Commission to accelerate its work in this area. The UK is clear
that the CoA's report is not a confirmation of the Commission's
existing approach, and the Commission's response needs to result
in a real transformation in the way evaluations are conducted
and used.
"On the CoA recommendation that the Commission
did not accept (to modify the monitoring system so that data remains
available three years after completion of a project), the UK would
like the Commission to look further at the case for implementing
this, and will raise this at an official level.
"The UK will continue to press for improvements
in both evaluation and monitoring, and will closely monitor the
Commission's progress in implementing the CoA's recommendations.
The UK has a seconded national expert working in the Commission's
evaluation unit and two seconded national experts in the results
unit. We will continue to use these positions to offer technical
support to the Commission on results and evaluation."
Previous Committee Reports
None, but see (36451), : Twentieth Report
HC 219-xix (2014-15), chapter 14 (19 November 2014).
14 For full information, see DG DEVCO.
15 See the ECA Special Report.
16 See (35735), 17709/13: Forty-seventh Report HC 83-xlii (2013-14), chapter 1 (30 April 2014), Paving the way for an EU Development and Cooperation Results Framework.
17 See EuropeAid.
18 See (36451), -: Twentieth Report HC 219-xix (2014-15), chapter 14 (19 November 2014).
19 See European Court of Auditors.