EVALUATION, ASSESSMENT AND MONITORING
2.89 As noted above,
POST goes into the question of evaluating the Framework
Programmes in considerable depth. Its conclusions, though expressed
in tones appropriate to a neutral body, are unfavourable: "Despite
the efforts of the Commission and experts outside it, it is not
clear that it is possible to select a number of evaluation techniques
and state that they will correctly assess the impact of the Framework
programmes, and provide information about possible future options.
While economic indicators can give the broad context within which
research and development is being carried out, and individual
evaluations such as UKIMPACT or horizontal work can give 'snap-shots'
of particular Member States or sectors, the current approach is
very piecemeal and inconsistent, with results often depending
on who is asking the questions" (POST 5.3).
2.90 POST also
notes "the extent to which evaluations rely on Framework
Programme participants to assess the value of the programmes from
which they have benefited". We can testify to this problem
from our own experience of this inquiry: our witnesses have proved
reluctant to criticise the goose which lays such golden eggs.
2.91 POST records
the outcome of a "meta-evaluation" done by PREST in
1990 for the Commission: The Impact and Utility of European
Commission Research Programme Evaluation Reports. Its conclusions
were "generally positive about the composition, independence
and methodology of the Commission's evaluations, as well as a
high take-up of the suggestions included in these 'meta-evaluations'.
The main deficiencies of the evaluation reports so far were the
lack of dissemination of the reports and their results to Member
States and programme participants, in addition to the length of
the evaluations which, combined with a lack of executive summaries,
made them difficult to read and unattractive. There were also
problems to do with timing (with follow-on programmes being decided
before evaluations had been completed), inadequate attention being
paid to policy considerations, particularly those relating to regulations,
and the need to interview non-participants in programmes to gain
an alternative view" (POST 5.2.4).
2.92 POST makes
some recommendations of its own. It suggests that the Commission
could take lessons in evaluation of research from the DTI: it
should expand its evaluation unit and widen its remit to include
the activities of other DGs besides DG XII, adopt a more
coherent strategy along the lines of the approach known in the
DTI as "ROAME", and get beyond its own programme managers
to talk to researchers themselves (POST 5.3). "More
effective and flexible procedures for evaluation are also needed
during the programmes in a timescale which can inform 'mid-course'
corrections ... the system should ensure that:
-- the selection of the members of the evaluation panel is transparent
-- the members of the panels are drawn from as large a 'pool' as possible
-- the panel members are involved in the evaluation process from the very beginning, i.e. they need to take part in the drafting of any questionnaires/interviews which will be used to collect raw data for evaluation
-- a realistic time-frame is in place, so that participants have time to reply, and the panel members have time to carry out their evaluation effectively" (POST 6.7).
2.93 The 1995 Monitoring
Panel also made specific recommendations for evaluation. They
called for a system of performance indicators, applicable consistently
to all programmes. "Examples of basic indicators should
include overall expenditure, management costs, and numbers of
students, PhDs, publications and patents." The Commission
replied, "The Commission Services fully accept this recommendation
and have set about co-ordinating the collation of new and existing
performance indicators through the recently established Inter-Service
Group on Monitoring and Evaluation. Furthermore, a project commissioned
at the start of 1996 with two groups of external evaluation experts
should provide some additional advice on possible project level
performance indicators". Tim Gatland (p 186) sounded
a warning about the use of milestones: "This leads project teams
to retain focus in unfruitful areas (which might have appeared
relevant at the start of the project) and to achieve milestones
in those areas, at the expense of more applicable work".