DFID Annual Report 2008 - International Development Committee


Examination of Witnesses (Questions 1-19)

MR DAVID PERETZ, MR ANTHONY KILLICK, MR ROBERT PICCIOTTO AND MS ALISON GIRDWOOD

9 JULY 2008

  Q1 Chairman: Good morning, Mr Peretz, and your colleagues. Thank you for coming to give evidence to us. We had said at the time when Hilary Benn announced the establishment of your committee that we would, in due course, take evidence from you. I think it has taken slightly longer than we intended, but it also fits into the report we will be doing on the department's Annual Report. Clearly, evaluation is an important part of it. I wonder if you could introduce your colleagues. I guess Alison should be introducing herself as she has a slightly oblique role in this.

Mr Peretz: I am David Peretz, and I am Chair of the Independent Advisory Committee on Development Impact. Can I call it IACDI for short? Robert Picciotto is on my left.

  Mr Picciotto: I am a professor at King's College London. I suppose that I am on the committee because I used to head evaluation at the World Bank and because I sit on the boards of the UK Evaluation Society and the European Evaluation Society.

  Mr Peretz: Anthony Killick is another member of the committee.

  Mr Killick: I am a development economist who has specialised in Africa and I am a Senior Associate of the Overseas Development Institute (ODI) in London.

  Ms Girdwood: I am Alison Girdwood, representing the Evaluation Department at DFID.

  Q2  Chairman: Which the committee did visit when we were in East Kilbride last year.

  Mr Peretz: The Evaluation Department provides the secretariat and secretarial support for our committee.

  Q3  Chairman: Which, I suppose, is a bit of an issue we might want to explore too, but perhaps we can get into the meat of it. Clearly we are in a situation where we have a rising aid budget and we need to know whether that budget has been effectively spent, and a whole raft of questions, obviously, arise out of that. This committee frequently asks the question: what works and how good is DFID at delivering what works? I wonder whether you could give an indication of how your committee goes about its work in answering those kinds of questions. You were part of that discussion at the ODI, but do you accept that that is a good starting point for both DFID and for your work as an independent committee?

Mr Peretz: I think it is a fine starting point. It is a question of what works, and do you learn lessons from things which do not work? Could I make one point at the beginning? I am very glad to have this opportunity to meet with the committee, and I hope we might have a continuing relationship. I am required, as chairman, to write an annual letter to the Secretary of State and copy it to members of this committee, which suggests that you might have quite a role, if you agree with our recommendations, in helping to make sure they get implemented. Perhaps more immediately, we on the committee would be very interested in having your concerns, either at this meeting or in other ways, feed into us. One of the things we are going to have to do quite shortly is decide on, or begin to discuss, a work programme of evaluations for the next three years, and the question of what is to be evaluated is the sort of question on which I would have thought this committee might have views. Shall I just give a brief sketch of what we have done?

  Q4  Chairman: If you could, briefly. I should say, we are aiming for this session to be around about an hour.

  Mr Peretz: I will try and be very quick, partly because we are being very transparent and publishing the minutes of our meetings, which I think you have got, so you will have seen those. One point to make at the beginning is that we have seen our role as covering both evaluations carried out centrally in the Evaluation Department of DFID and also evaluations, some people call them self-evaluations, carried out across the department. It is much easier to get a handle on what the Evaluation Department is doing than on what is being done at the various places throughout DFID, but we are trying to look at both. On the latter, what I would say at this point is that my impression is that there is not a very strong culture of self-evaluation in DFID; there needs to be something of a cultural change, and we are thinking about ways of achieving that. One of our recommendations is that the central Evaluation Department should have some role in quality assurance and control over self-evaluations. So far we have had three meetings. We have given priority to trying to get the structures and processes right in order to be able, in due course, to give the assurances that our terms of reference require us to give about the independence, impact and effectiveness of evaluation. I am going to ask Bob in a minute to say something. We have been looking at how to assure independence and bring it up to international standards. You will have seen in our most recent minutes that we have a set of recommendations about that.

  Q5  Chairman: We have, obviously, a number of questions to ask you which will draw out some of these points.

  Mr Peretz: Let me just say that, looking forward for our work plan, at our next meeting we are going to discuss a new evaluation policy for DFID, which I think could be quite important, and begin a process of consultation about a three-year work plan, which I mentioned, for the central Evaluation Department. Do you want to go on with questions?

  Q6  Chairman: We have a number of questions, so I think we can draw things out in the questions. As a supplementary to the more general question, the discussion we have had as a committee (and we have had some advice from the Scrutiny Unit in the House of Commons to try and explore this a bit more) is that DFID in its Annual Report is using its traffic-light measures against the Millennium Development Goals. The Millennium Development Goals are big international objectives and it is pretty difficult to relate back what DFID is doing to achieve those outcomes. DFID, I think, will acknowledge that, but do you feel your approach might help to fill in the space? The sort of thing we have been looking for is that DFID should be saying, if fulfilling the MDGs, either generally or in the countries we are engaged in, is part of our objective, how do we identify the specifics that DFID is doing that directly relate to that achievement, as opposed to taking collective credit or blame for something which may have no causal link at all? Is what you are doing trying to help us answer that question? Mr Picciotto is nodding.

  Mr Picciotto: Yes. What you are asking for is at the core of the evaluation function. Namely, you are asking for accountability and for learning about what works and what does not work. This is what an independent evaluation department ought to be delivering. We are focusing on accountability, but one has to be accountable for learning as well: what are the policy principles and the agreed programme goals which have been approved by the governance of DFID? Evaluation checks whether these objectives are relevant, whether the policies are relevant and whether they are being achieved in an efficient way. Next there is the question of impact—what works, what does not work. Impact goes beyond outcomes, and for evaluating impact one needs very sophisticated methods, which are very close to social research. So it is in these two areas that the committee is going to focus. In order to generate the kind of knowledge needed for this, you need two things: independence and quality. Independence is crucial for credibility. As in the auditing of accounts, you need an independent function, and that is what the committee focused on at the very beginning; we hope to produce a detailed report on this aspect. We used good practice standards accepted by the international community and based on the experience of governments and auditing organisations, and we worked very systematically through a template designed to assess how independent the evaluation function in DFID is. We have also started reviewing the quality of the reports. How are they produced? Are the methods right? Are the skills appropriate?

  Q7  Chairman: It will be interesting to see how that develops.

  Mr Peretz: Can I try and address this? I think the question you were asking, Chairman, was about the attribution of what DFID do. There is a big problem of attribution in the aid business, and I should say, these are not issues we have discussed in the committee, I am just talking as an individual, but successful development, it certainly seems to me, is usually the result of something of a team effort between the government of the country, crucially, having the right sorts of policies, and a whole set of donors and other agencies supporting that. I have an analogy: it is a bit like trying to assess the performance of an individual member of a football team when you are asking, "What is DFID doing and what is the impact of what it is doing?" An important part of evaluation is to make sure that the whole process is working, that the combination of government policies and support from donors is actually reducing poverty and making progress towards the MDGs. Then you have to ask a separate question, and it is like asking what contribution an individual member of a football team makes to the fact that it is a successful team or is at the top of the league table. What is the full-back contributing? You have to ask some rather more subtle questions. It can be done, but it is not a question of being able to say that this money from DFID translates directly into so many children out of poverty or so many people getting HIV/AIDS treatment. It is more a question of asking questions like, as with the football player, "Is the ball being passed to other players at the right time?", and at DFID you would say, "Are they co-operating with others? Is the advice they are giving to government good advice or bad advice? Are they helping to strengthen the national systems of governance?", and so on. These are things that you can evaluate, but it is not a simple thing. I sense that some people are looking for a very simple answer to the question, "How much does each pound do in terms of the MDGs?" Tony may want to add something.

  Q8  Mr Crabb: I take the point, Mr Peretz, but DFID does claim that they lift three million people out of poverty every year. Are you saying that that is an impossible claim for the department to make?

  Mr Peretz: Tony may be able to answer. As I understand it, this is based on the Collier/Dollar research, which is a cross-country macro analysis of what you would expect aid to do in good circumstances. It is perfectly reputable research, but it is not looking at—. Tony, you probably know more about the research than I do.

  Mr Killick: As David has said, claims of that kind can only be derived from very macro level research. I guess one of the underlying facts on which that claim is based is that, on the whole, British aid is rather well distributed in favour of poor countries by comparison with various other donor organisations. If one takes the view that aid helps reduce poverty, then certainly the orientation of British aid is of a nature that helps in that direction, but evaluation, which is the concern of the committee, operates at a rather different level. It does not operate at that meta-level of trying to estimate numbers of millions of people pulled out of poverty from the programme as a whole. The evaluation is looking at specific interventions, what the effects of those interventions are and the cost-effectiveness thereof, which is a rather more micro level of examination.

  Chairman: Perhaps we could move on to some other aspects of that. Richard Burden.

  Q9  Richard Burden: I would like, if I may, to ask you a little bit about an area which, I suppose, brings out this problem in very large quantities, which is how you assess the effectiveness of budget support. 20% of DFID's bilateral aid programme goes through budget support, very serious amounts of money, and the Public Accounts Committee recently had a look at that. On the one hand, they seemed to be saying that this did seem to be effective in terms of increasing services to the benefit of the poor, expanding access to free health and education, and so on, and it said that in 6 out of 9 countries they assessed that that was the case, but then it also looked at areas where budget support was not necessarily used and found fairly similar results. How do you think we can assess whether budget support is working or not? What are the kinds of indicators we can use on that?

  Mr Peretz: This is not an issue which my committee has discussed yet, but we will, and I will ask Tony to say something in a minute about it. There was, of course, a big multi-national multi-country evaluation of budget support, which I think this committee has seen and had presented to you, which was pretty positive. My own view, I am speaking personally, but I have seen budget support working in a number of countries and, in the right circumstances, it works very well and it avoids doing what traditional methods of aid often do: it avoids undermining or sidelining the national systems of setting budget priorities and financial management.

  Q10  Richard Burden: It is difficult to know whether the Public Accounts Committee looked at the effect on developing long-term capacity.

  Mr Peretz: That is a big issue. I am personally quite a fan of budget support, but there are circumstances where it is clearly not the right thing to do and there are circumstances where it is. As we talk about a three-year programme of evaluations to be done by DFID Evaluation Department, the question of what evaluation or evaluations should, or might, be done in this area is one that we will address. Tony, you might want to add to this. I know that you have looked at the PAC report.

  Mr Killick: Yes. I share David's view that there is a strong prima facie case in favour of budget support, by comparison with the traditional sort of project-based approach which leads to very fragmented, incoherent and highly costly types of intervention, but budget support is not universally applicable. One is, after all, putting money into fiscal systems which are often quite weak, and so there is an element of risk involved there, sometimes a substantial element of risk. I think risk is perfectly justified, but it does have to be justified in terms of a careful appraisal of the situation before one goes into it. I had a look at the Public Accounts Committee report and one of the things I took from it was its conclusion that DFID, as an institution, is not collecting the information and does not have the systems in place to enable it to reach firm judgments about cost-effectiveness, and I think that reinforces what David said about the need to look at that rather carefully. There was a multi-donor, multi-country evaluation back in 2006, which arrived at really quite positive conclusions overall, but it was not addressing issues such as whether there are systems in place within DFID to take sensible decisions about cost-effectiveness, and maybe some more process-oriented evaluations of that kind could be a very appropriate response to the PAC report.

  Q11  Richard Burden: It would seem to me that there is likely to be greater focus on this and, therefore, a need to develop robust evaluation mechanisms around it. They are not only necessary, on which I think we are all agreed, but there is probably also quite an urgent need for ones that build in the point that Malcolm made about issues of building capacity as well as more tangible results. You said this is going to be an area you are going to be looking at. Is there any likely timescale on that? I am asking about developing mechanisms rather than looking at DFID.

  Mr Peretz: What I said was it will be part of our discussion. One of the few things that this committee actually has the power to do is to agree the future work plan of the Evaluation Department. What we have decided is that we will consult on a three-year work plan, which will start next March, and we will consult over the winter about that with quite a long list. One of the elements of that list will be an evaluation, or several evaluations, of budget support. I should say, though, there are things that certainly could be done in the meantime. The Evaluation Department already do about 5 or 6 country programme evaluations a year looking at individual countries, many of which have budget support as one of the instruments, and one of the things we have agreed as a committee, and I am very glad that the head of the Evaluation Department has volunteered to do this, is that he is going to write, from now on, an annual report which will look across the evaluations that have been done over the year and try and draw out themes. One of the themes might well be the circumstances where budget support works and where it does not, just looking across those evaluations which have already been done.

  Chairman: Sir Robert Smith has some follow-on questions relating to that. It might be appropriate for him to come in, if that is all right with you.

  Q12  Sir Robert Smith: On the methodology of evaluation and how rigorous it is, it has been put to us that you can have an informal evaluation that says the aim of the intervention by DFID was to achieve the goal of, say, 20,000-odd people getting immunised, or something like that, and then the evaluation comes along and says, "We put the money in and we ticked the box because 20,000 people got immunised". Or there is the more rigorous evaluation which asks, "If we had not made the intervention, was that going to happen anyway? Was there a causal link between the intervention and the outcome?" Where on the spectrum does the evaluation tend to take place: more towards the informal or the more rigorous?

  Mr Peretz: This is what I was trying to say earlier. Tracking through money to effects is never that easy. If, to take your example, money was put in for inoculations, then for that to work it probably depended not just on the DFID money but on things the government was doing in terms of having health centres and transforming the health service, and even things like the transport arrangements that get people to the clinics. All these things have several players, and that is the problem with attribution. On actual evaluation methods, I turn to Bob, who is the expert on this.

  Mr Picciotto: There is a need to focus on impact evaluation of the kind that you are suggesting. In fact DFID is chairing a group which is focusing on how the development community should approach impact evaluation. There is a lot of momentum behind randomised controlled trials where, indeed, you distinguish between two groups, the experimental group and the control group, and you select the group of individuals affected (let us say by a vaccination programme) on a random basis. This kind of approach removes the bias that exists, as you are implying, if one simply looks at what happens before and after the intervention, since the change may be the result of factors other than the intervention. These are very powerful methods, but on the other hand they have their own limitations. You have something called the Hawthorne effect, where the simple fact of doing the intervention modifies the behaviour of participants; you have the John Henry effect, where people in the control group are also affected; you have selection bias, and so forth. This is why the evaluation profession is arguing for a mix of methods: randomised controlled trials, other methods such as matching through propensity scores, interrupted time series, and other techniques which are very close to research. That is why one of the issues in an evaluation policy is how the evaluation department connects with the research department and the chief economist, and so forth. At the World Bank both the research and the evaluation departments are working together on developing these techniques, but one has to be very careful to triangulate between these techniques and more traditional techniques which go beyond whether the intervention works or not to figure out why it works, how it works and who makes it work. One of the problems with impact evaluation of the highly abstract kind I described is that it is weak on accountability, because all it does is answer the question, "Does it work or does it not work?" It does not tell you who is supposed to make it work, which is the point the Chairman was raising earlier. So we are recommending a mix of methods, and I think it is the choice of methods and their mix which the Evaluation Department should be overseeing.
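  [The contrast Mr Picciotto draws between naive before-and-after comparisons, randomised controlled trials and propensity-score matching can be made concrete with a minimal Python sketch. It is purely illustrative: the hypothetical immunisation programme, the "distance to clinic" confounder, the effect sizes and all figures below are invented, and nothing here describes DFID's or the committee's actual methods or data.]

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical confounder: distance to the nearest clinic (km). People who live
# closer are both more likely to be reached by the programme and more likely to
# be immunised anyway.
distance = rng.uniform(0, 10, n)

def immunised(treated, distance):
    # Invented outcome model: baseline probability falls with distance; the
    # intervention adds a true effect of +0.15.
    p = 0.65 - 0.03 * distance + 0.15 * treated
    return rng.binomial(1, p)

# 1) Observational programme: treatment concentrates on people near clinics.
p_treat = 1.0 / (1.0 + np.exp(distance - 5.0))
treated_obs = rng.binomial(1, p_treat)
y_obs = immunised(treated_obs, distance)
naive = y_obs[treated_obs == 1].mean() - y_obs[treated_obs == 0].mean()

# 2) Randomised controlled trial: assignment is independent of distance.
treated_rct = rng.binomial(1, 0.5, n)
y_rct = immunised(treated_rct, distance)
rct = y_rct[treated_rct == 1].mean() - y_rct[treated_rct == 0].mean()

# 3) Propensity-score matching on the observational data: model the probability
#    of treatment given distance, then pair each treated person with the
#    untreated person whose score is closest.
ps = LogisticRegression().fit(distance.reshape(-1, 1), treated_obs)
ps = ps.predict_proba(distance.reshape(-1, 1))[:, 1]
t_idx = np.where(treated_obs == 1)[0]
c_idx = np.where(treated_obs == 0)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]
psm = (y_obs[t_idx] - y_obs[matches]).mean()

print("true effect:                0.150")
print(f"naive observational gap:    {naive:.3f}  (inflated by the confounder)")
print(f"RCT difference in means:    {rct:.3f}")
print(f"propensity-score matching:  {psm:.3f}")

  [On this invented data the naive comparison overstates the effect, because the people most likely to be reached are also those most likely to have good outcomes anyway, while randomisation and matching on the propensity score both recover something close to the true effect. As the witness notes, such methods answer "does it work?" but say little about who is accountable for making it work.]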

  Q13  Sir Robert Smith: You mentioned earlier how development and aid is a team effort between the recipient and the donor and many other partners involved, but then, obviously, the partner countries do not necessarily have the resources, the capacity, to gather the data. What is DFID doing to overcome these challenges to make sure the recipients have the resources to take part in the evaluation?

  Mr Peretz: I think it is a very important point. This is the first part of my analogy. How do you judge the performance of the team? Is poverty being reduced? Are more children going to school? Are education and health outcomes better? Are more people being immunised? Collecting statistics and the national capacity to do this is absolutely essential. This is something Alison might want to add to, but my impression is that DFID are doing quite a lot to try and strengthen national systems of statistics and monitoring arrangements of this kind, and it is critical.

  Ms Girdwood: The main areas of work are supporting the Marrakech Action Plan for Statistics, which is mainstreaming strategic planning of statistical systems and assisting capacity to develop national strategies for statistics; supporting internationally the 2010 census round and the Household Survey Network; and general statistical capacity building, largely through the Paris 21 Consortium. We are supporting that to quite a large extent.

  Mr Peretz: This is a consortium of donor countries.

  Ms Girdwood: Yes. We are also doing an evaluation, through the joint Paris Declaration evaluation, of national statistics capacity building and of how best to support it internationally.

  Q14  Sir Robert Smith: So there is proper co-operation between donors as well. Joint evaluations, presumably, can pool resources?

  Mr Peretz: Yes; absolutely. Certainly our committee are very much in favour of joint evaluations as part of the mix, and when you are looking at country programmes certainly the first part of my analogy—how is the whole thing working?—would be much better done that way. On the other hand, there are great problems in actually organising and getting people together to agree to do joint evaluations. It is one of the areas where there is quite a lot of money in the Evaluation Department's budget available for this which they are having difficulty spending, but we are going to get on to budgetary issues later.

  Q15  Chairman: I am not sure you are there to help them spend; you are there to help them to get results.

  Mr Peretz: They are having difficulties spending effectively, I should say.

  Q16  Sir Robert Smith: You see a big benefit in joint working, but the barrier is the different methodologies and, presumably, different cultures within the different donors?

  Mr Killick: Yes. There are different methods, different approaches, but also different objectives. Different donors have different objectives and want to get different things out of the system. The point I would be particularly keen to make, in the context of your question, is that we should not have an unrealistic expectation of what aid and aid donors can do by way of strengthening institutions. What we have learned about institutional factors, and how institutions change and improve over time, is that the key factors are domestic, not external. Of course, donors such as DFID can provide technical assistance, training and other types of resources, but in the end the leadership and the motivation for this have to come from within the countries. That is one of the reasons why one needs to be really quite careful about choosing the countries or the governments that one is assisting in these ways: in some situations it is so easy for a donor to design a fancy programme for strengthening this institution or that, but without any real local buy-in it is most unlikely to be effective.

  Mr Picciotto: Quickly, on this particular point, I would say three things. First of all, the Paris Declaration asks for harmonisation across donors of all aid practices, and this applies to evaluation as well. Secondly, harmonisation of evaluation methods is under way both in the multilateral system and in the bilateral system under the OECD Development Assistance Committee, where Nick York, who heads the DFID Evaluation Department, is taking over the evaluation working party. Thus the harmonisation of evaluation methods is very much on the agenda of the evaluation community. Thirdly, there is the issue of evaluation capacity development in poor countries, connected, for example, to the PRSP system (the Poverty Reduction Strategy Paper system), involving selecting the right households and getting the statistical system connected to the evaluation systems. Building evaluation capacity in developing countries is important. It is one of the things we will take a look at in terms of how the Evaluation Department of DFID is contributing to this priority.

  Q17  Jim Sheridan: I think evaluation of how taxpayers' money is used in terms of development is absolutely crucial, and I do not think anyone would deny that, but there is a growing number of evaluators now coming into this field and we have evaluators evaluating the evaluators, which is somewhat concerning. I am just a bit concerned that, if we are going to be focusing on evaluation of evaluation, we are diverting resources away from where they should be going, and that is to the people that actually need them. The question I really want to ask is: what is so unique about your organisation that other organisations have not brought to the table so far? Is there a possibility of duplication? But, most importantly, how can you assist DFID in its overall objective of getting development to the people that need it?

  Mr Peretz: I think we were set up as a committee with terms of reference to assure the independence of evaluation in DFID, to try to make sure that lessons from evaluations are learnt and, where recommendations are made and accepted, that there is follow-up, and also to improve the clout of evaluation, to get it listened to in the department, because a lot of work is done and there are lessons to be learned. Evaluation is partly about accountability, it is about saying whether a thing is well done or not, but at least half of it is about lesson-learning. I think we have begun to make some impact. You will see that one of the things we have got now is an annual report from the department on follow-up to past evaluations, so that these are not just books which are put on the shelf; the department is going to account for what it is doing as a result of recommendations. Obviously, some of the recommendations will be rejected, but where they are accepted there should be follow-through, and one of the things we are doing as a committee is making sure that happens. I think we are taking steps to improve the independence of the central Evaluation Department in DFID, and you will see we have 11 proposals in our latest set of minutes which we are going to take forward, and I will attach them to my annual letter to the department.

  Q18  Jim Sheridan: I would hazard a guess that similar organisations would say exactly the same. Can I ask Alison what is unique?

  Mr Peretz: Can I say, there is one other country which has a committee similar to ours, which is Ireland. We have now some inquiries from other countries. I think the Dutch Head of Evaluation wants to come to our next meeting. It is early days yet, but I think it could have quite a big impact.

  Q19  Jim Sheridan: The point I am trying to make is that there is a finite amount of resources. I do not want resources spent searching for statistics or evaluations. I am interested, Alison, from a DFID perspective, what is unique about this advisory committee?

  Ms Girdwood: Soon it will not be unique. It is actually being copied by a number of our partner evaluation departments. I think the Netherlands are adopting the same model. So I think there is a general feeling across the other agencies that they need something of this sort to strengthen the function internationally.

  Mr Picciotto: You are making two points. First, what is unique about this committee? Secondly, who evaluates the evaluators? Our committee does not do evaluation. Is too much going into evaluation, whether in DFID or in this committee? The committee has only met three times. My own view is that the resources assigned to the committee are tight, but probably appropriately tight, to make sure there is subsidiarity and we do not start doing other people's work. If you look at how much resource goes into evaluation in DFID today, it is 0.1% of the total programme budget, and as a percentage of the administrative budget it is roughly 2%, which is pretty much in line with other aid organisations. The point that the Chairman was making is that, as the programme budget of DFID goes up, for quality purposes and for accountability purposes, the evaluation budget should also increase in a reasonable way, in line with good practice. This committee is filling a gap: before the committee was established there were questions about how independent the Evaluation Department in DFID is. That is the key value added by this committee.

  Mr Peretz: The exact figure is 0.09% of the total—that is for the current year—but, as I indicated, some part of that will not be spent because it is actually allocated only for international co-operative evaluations which, as we have discussed, are quite difficult to mount, and that is more than half of it. If you look at the administrative budget, which is what DFID spend themselves and use to spend on consultants for evaluating DFID programmes, it is 0.04% of the total spend this year. I think what has concerned the committee in our discussions so far is not so much the absolute size of these figures, which we have not really discussed, but the fact that this is a declining figure rather than a rising figure at a time when the total programme budget is rising quite fast.

  Daniel Kawczynski: I think my colleague, Mr Sheridan, has touched upon a very important point. We obviously want co-ordination of evaluation, and Mr Picciotto said that there had been three meetings, is that right, that you had had three meetings, and that the budget was how much: 0.1%?



 


© Parliamentary copyright 2009
Prepared 19 February 2009