Select Committee on International Development Seventh Report


6  Reform of the humanitarian sector: measuring needs and performance

108. Measurement of humanitarian needs and the performance of aid agencies is the third area which has been identified as in need of reform. Particular issues include: improving the measurement of need; enhancing evaluation of the performance of humanitarian actors; ensuring the implementation of lessons learned from evaluations; and establishing the accountability of humanitarian actors.

109. In recent years, and particularly as a result of the Joint Evaluation of Emergency Assistance to Rwanda (JEEAR) in 1996, a number of initiatives have been developed to address these issues. DFID has been supportive of these initiatives and provided funding to several. Given the proliferation of such initiatives, any lack of progress in this area appears to reflect a deficiency in donor will rather than a shortage of mechanisms. The most prominent include:

  • Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) established in 1997, as a sector-wide network to provide a forum on learning, accountability and performance issues for the humanitarian sector;
  • Humanitarian Accountability Partnership — International (HAP-I) founded by humanitarian agencies in 2003 with the aim of making humanitarian work more accountable to its intended beneficiaries;
  • People in Aid established in 1995 as a central resource to assist the humanitarian agencies to improve the quality of their human resource management;
  • The Sphere Project initiated by humanitarian NGOs and the Red Cross/Red Crescent movement in 1997 to produce a Humanitarian Charter and handbook of standards for humanitarian work; and
  • Good Humanitarian Donorship Initiative (GHD) established in 2003, to provide a forum for donors to discuss good practice in funding humanitarian assistance and other shared concerns, to define principles and standards, and to provide a mechanism for encouraging greater donor accountability.

Needs assessment

110. In its written submission the British Red Cross emphasised the importance of "the quality, accuracy and speed of assessments on the ground provided by key actors" in affecting DFID's decisions about how to respond to humanitarian disasters. DFID told us that "humanitarian operations remain characterised by a weak evidence base."[162] The lack of an agreed common basis for measuring and comparing levels of humanitarian need continues to present a major obstacle to prioritisation, impartial decision-making and accountability in the humanitarian sector. John Scicchitano from USAID used the example of the international response to the famine in the Sahel in 2005 to warn of the dangers of implementing humanitarian interventions based on inadequate data and understandings of need: "Much of the 2005 CAP funding for the Sahel came after heavy media coverage… Such reports successfully galvanized attention to an acute need, but donors and implementing agencies would be wise to look beyond the media reports towards technical assessments of the need."[163] The British Red Cross told us: "A regular finding from evaluations is the weakness of needs assessment practice."[164] Oxfam commented that: "Assessment and monitoring systems must be refined to better identify needs, to track earlier-stage crisis indicators, and to serve as a guide for more equitable allocation of global resources."[165]

111. Our predecessor Committee commented on the inadequacy of humanitarian data collection and analysis in its 2005 report Darfur, Sudan: The responsibility to protect.[166] The Committee was concerned by the "extremely misleading" mortality data produced by the WHO, which only included violent deaths among internally displaced people (IDPs) who had managed to reach IDP camps. In its response to the report, HMG told us:

"We recognise the limitations of both the WHO's survey and other studies and agree that more accurate data is needed… We agree that it is very important the statistics are presented in such a way that they are not open to misinterpretation, and will be working with WHO to ensure this."

The humanitarian crisis in Darfur highlighted the potential consequences of inadequacies in data collection and analysis, and the importance of progress in this area.

112. The development of benchmarks based upon a core set of common indicators provides a possible way to improve the ability of needs assessments to inform resource allocation. Need is usually measured by malnutrition and mortality indicators, and DFID told us that the WHO has begun work to develop new global benchmarks for mortality and malnutrition.[167] We are pleased to see the commitment made by DFID in their response to the OECD DAC Peer Review of the UK to:

"Strengthen the evidence base for DFID's decision-making. This will be done by working with international partners to improve the quality and timeliness of data on humanitarian need, in particular malnutrition, mortality and other selected performance indicators."[168]

We are concerned that any new initiative to develop benchmarks for humanitarian need should not "reinvent the wheel". Nonetheless there is considerable support for such an initiative, and donors must note that it would need to be adequately resourced. We look forward to hearing about DFID's progress on benchmarking.

113. In addition to agreeing a common basis for measuring and comparing levels of humanitarian need, humanitarian actors must work to improve their data collection practices, in order to ensure the quality of the data they collect. We heard evidence that the most important way of ensuring that needs assessments are accurate and realistic is to involve those affected by a disaster. We heard a great deal of rhetoric about the importance of community consultation in needs assessments, but we remain unconvinced that this is universally reflected in practice.

114. During our visit to Pakistan we visited a village in North-West Frontier Province, which had been affected by the earthquake. The Norwegian Refugee Council (NRC), which had provided emergency shelter in the village, kindly facilitated the visit. A staff member of the NRC told us the agency's standard practice was to send a three-person team to assess the needs of each village, and that in order to ensure a gender balance in the needs assessment at least one member of each team had to be a woman. When female members of our group sought an opportunity to talk to women in the village, however, the women told them that they were the first women to come to the village in the aftermath of the earthquake, and that the women had not been asked for their views on the emergency shelter programme. It would be unfair to judge the efficacy of NRC's needs assessment process on the grounds of this single anecdote, and the women we spoke to were very satisfied with the emergency shelter which had been provided, but this example does illustrate the potential gulf between rhetoric and reality in terms of needs assessment.

115. This example also highlights the difficulty for DFID of evaluating the effectiveness of the needs assessment procedures practised by its NGO partners. This issue relates to a broader one arising from DFID's increasing engagement in fragile states and other difficult environments, where the difficulty of monitoring partners and the projects they implement increases the risk of under-performance or even fraud. We recommend that DFID give detailed consideration to the issue of how to monitor and evaluate the work of its partner agencies in countries where conflict or other dangerous conditions make first-hand assessments by DFID staff infrequent or impossible.

116. Several NGOs highlighted the importance, once data has been collected, of disaggregating the resulting analysis of needs according to different criteria. Oxfam and Womankind Worldwide both told us: "There is a need to disaggregate assessment information on gender lines to better understand unique vulnerabilities."[169] HelpAge International emphasised the need to disaggregate data according to age "in order to design an inclusive and effective response based on need."[170]

Evaluating the performance of humanitarian actors

117. In its written evidence ActionAid International drew on an analysis of the international response to the South Asia Earthquake to highlight one of the dangers of inadequate evaluation of humanitarian interventions: "Monitoring and evaluation were poor, and unless these issues are improved the British public are likely to be less responsive to future emergency appeals. There needs to be more emphasis on monitoring and oversight of equality and equity with funding allocated to this."[171]

118. Effective evaluation is naturally dependent on an accurate assessment of the needs the intervention was seeking to address. This further strengthens the case outlined above for improving needs assessment in the humanitarian sector. A second prerequisite for effective evaluation is the existence of agreed benchmarks against which the quality of responses can be measured. We heard that the quality of such evaluations has increased since the Joint Evaluation of Emergency Assistance to Rwanda (JEEAR) in 1996.[172] At the moment the key benchmarks for the quality of responses to the needs of those affected by a disaster include Sphere standards and the Red Cross/Red Crescent/NGO Code of Conduct. Nicholas Stockton told us that: "We do have now a way of talking about humanitarian need, thanks to the Sphere Project, that enables us to have a genuine conversation about relatively objective observed needs and also responses around the main areas of humanitarian assistance."[173] The Sphere standards have been criticised, however, for being too aspirational and for employing indicators which are not context sensitive. According to the ICVA the standards are predominantly used by operational staff, through whose experience they were framed, and are not well known at the level of senior humanitarian coordinators.[174]

119. The Secretary of State has been a strong advocate of "the need to set benchmarks for the scale and speed of the response we require the humanitarian system to provide" and to create "standards against which we can hold agencies to account."[175] These goals have been encompassed within the agenda of the Good Humanitarian Donorship Initiative (GHD) which has been strongly championed by DFID. The HRR also called for a new focus on benchmarks but laid a different emphasis, on the need to measure response in the first four weeks of a crisis.[176] ICVA question how "the first four weeks" would be defined in a slow onset disaster and complain that Jan Egeland has de-prioritised benchmarking so that it has not progressed as fast as it might have done.[177] Nonetheless the Secretary of State's call has generated an important debate around the nature and quality of the data upon which humanitarian actors premise their response.

Lesson-learning

120. Evaluating humanitarian interventions is pointless unless lessons are learnt from the evaluations and applied in future disaster scenarios. Good evaluations do not necessarily lead to good impacts.[178] We were told that lesson-learning does not happen consistently. Tearfund said: "It is apparent that lessons are seldom learnt from previous disasters."[179] John Mitchell from ALNAP informed us that "to try to improve learning through evaluation"[180] was his organisation's key mandate. ALNAP produces an annual Review of Humanitarian Action in which it evaluates the quality of the evaluations that have been undertaken that year by its 56 members in the humanitarian sector, and produces an evaluation synthesis. The goal is to improve performance in the humanitarian sector by improving the quality of evaluations and facilitating learning from those evaluations. ALNAP also produces "Lessons learned" papers in the aftermath of specific disasters, drawing together relevant lessons learnt through evaluations of previous similar disasters.[181] Nonetheless, responding to a question about the extent to which the lessons learnt from evaluations are put into practice, John Mitchell said:

"I think people are trying, but I cannot put my hand on my heart and say there has been a significant improvement. Every year we do an evaluation synthesis of ALNAP and the same lessons are coming round time and time again now. There are about eight or ten key areas that we see keep recurring."[182]

121. The British Red Cross recommended the use of real time evaluations to achieve lesson learning in the context of an ongoing response:

"Our experience is that real time evaluations that provide feedback to the concerned implementers as they manage a response are a useful tool for learning and improving practice. We thus recommend increasing usage of this tool provided that it is understood and used appropriately by professionals with demonstrated skill. Another challenge, particularly important with real-time evaluations, is to communicate to the media and the public complex messages from evaluations in an accurate and intelligible manner, and to ensure that these messages are not deformed, oversimplified, or used out of context by the media."[183]

It seems likely that evaluations undertaken while a response is ongoing are more likely to have an impact than those undertaken once a response is complete.

Accountability

122. Nicholas Stockton told us that in his view, the humanitarian sector was "profoundly unaccountable." He went on to explain:

"…as a system… concerned with the delivery of services and goods to people, probably more than any other system I can think of, it is one characterised by the profound imbalance of power between those who provide assistance and those who receive it. Economists talk about this as the principal/agent relationship. The greater the imbalance of power between the provider and agent of services and goods and the principal (the people who are supposed to be provided with some kind of service or receive assistance) the greater are the risks of those services being both inefficient and ineffective."[184]

Mr Stockton's explanation highlights the key concern of HAP-I, to improve the accountability of the humanitarian sector to its beneficiaries, so-called "downwards accountability", on the basis that most accountability in the sector is currently "upwards" towards multilateral agencies and donors.

123. Tearfund's memorandum supported Nicholas Stockton's argument: "Downwards accountability to affected communities remains extremely weak, despite there being a direct correlation between this type of accountability and the quality of work."[185] Mr Stockton attributed the weakness of downwards accountability to the "very, very long, extraordinarily long chains of accountability" which make it a "very, very rare event" for donors such as DFID to be "in a position to listen to complaints from beneficiaries, survivors of disasters, and take serious account of their views." In his view, DFID needs to be more rigorous about ensuring that its partners are soliciting and taking on board the views of beneficiaries:

"I would like DFID to consistently ask of an organisation which asks for humanitarian funding: Will you talk to the people that you say you are going to assist, in order that you can ascertain that what you are proposing to do is what they need? Early consultation would be my very first question. My second question would be: Do you then ensure that there is public information provided about what you are going to do to people that need it?... Do you provide clear information in a language that is accessible to those people who need to interpret it, if you like, into their own language in a way that enables them to plan and review your response and, if necessary, tell you that you are doing the wrong thing? Do you ensure that there are mechanisms thereafter for feedback, including, complaints and so on?"[186]

124. The 2006 OECD DAC Peer Review of the UK included a focus on the UK's humanitarian assistance. It found that:

"UK's approach to ensure adequate involvement of beneficiaries in the design, implementation, monitoring and evaluation of its humanitarian activities is not clear and at field level it is recognised that this issue will have to be addressed to further strengthen capacity building and advance the design of needs based response."[187]

DFID did not respond to this specific issue in its response to the DAC Review,[188] although in its written memorandum the Department admits: "we recognise that there remains a need to improve how existing standards are applied and monitored, and to find better ways of enabling beneficiaries to contribute and hold humanitarian actors responsible."[189] We recommend that DFID clarify its approach to ensuring the involvement of beneficiaries in the design, implementation, monitoring and evaluation of its humanitarian activities, and affirm its commitment to tackling this issue at headquarters as well as field level. The involvement of beneficiaries should include opportunities for recipient states and populations to input into dialogue on and review of proposals for humanitarian reform.

Good Humanitarian Donorship Initiative (GHD)

125. As the GHD initiative was established only in 2003, it is too early to evaluate its impact, although the evidence we received indicates there is support for the initiative within the humanitarian sector.[190] Joanna Macrae outlined the rationale behind it:

"I think what donors came to realise… was that the volume of ODA being spent on humanitarian assistance was rising very, very sharply but there were no norms against which donor performance could be measured… and I think also whole agendas around harmonisation of donor procedures were getting greater currency more generally, so there was a kind of moment then when donors came to the view that it would be useful to have such principles to guide their behaviour both bilaterally and collectively, and by initially incorporating these principles into the DAC peer review process and, more recently, having them agreed as a reference point for DAC members, I think there has been an attempt made to make sure that humanitarian assistance is basically subject to the same level of scrutiny as the main part of development assistance."[191]

The GHD "Principles and Good Practice of Humanitarian Donorship" were endorsed at a conference in Stockholm on 17 June 2003,[192] and there are 23 governments which are now signatories to the process.[193]

126. It is important that the GHD does not remain a northern donor-led initiative but incorporates southern and non-traditional donors. Although DFID's advocacy for GHD is valuable, the Department needs to make sure that it continues to be seen as a sector-wide initiative and not purely as a UK-driven agenda. The GHD process also needs to engage NGOs and civil society who could potentially use GHD as a framework for evaluating donors' work. In future inquiries which touch on humanitarian issues we will consider the extent to which DFID is adhering to the GHD principles it has championed.


162   Ev 128 [DFID]

163   Ev 185 [Mr John Scicchitano]

164   Ev 154 [British Red Cross]

165   Ev 169 [Oxfam]

166   DFID response to the Report of the International Development Committee of 30 March 2005, Darfur, Sudan: the responsibility to protect, available online at http://www.dfid.gov.uk/pubs/files/sudan-command.pdf.

167   Ev 131 [DFID]

168   DFID's response to the OECD DAC Peer Review of the United Kingdom, 11 September 2006, paragraph 36.

169   Ev 234 [Womankind Worldwide]; Ev 171 [Oxfam]

170   Ev 205 [HelpAge International]

171   Ev 189 [ActionAid International]

172   Q 124 Mr John Mitchell, ALNAP

173   Q 134 Mr Nicholas Stockton

174   ICVA talkback 7(3), October 2005

175   Ev 131 [DFID]

176   Adinolfi, Bassiouni, Fossum Lauritzsen and Roy Williams, 'Humanitarian Response Review: An independent report commissioned by the United Nations Emergency Relief Coordinator and Under-Secretary-General for Humanitarian Affairs, Office for the Coordination of Humanitarian Affairs (OCHA), August 2005', available online at http://www.reliefweb.int/library/documents/2005/ocha-gen-02sep.pdf.

177   ICVA talkback 7(3), October 2005

178   Q 151 Mr John Mitchell

179   Ev 174 [Tearfund]

180   Q 124 Mr John Mitchell

181   For example, Houghton, 'Tsunami emergency - lessons from previous natural disasters' (January 2005), ALNAP.

182   Q 141 Mr John Mitchell

183   Ev 155 [British Red Cross]

184   Q 119 Mr Nicholas Stockton

185   Ev 175 [Tearfund]

186   Q 130 Mr Nicholas Stockton

187   OECD, 'United Kingdom: Development Assistance Committee peer review' (2006), p. 96, available online at http://www.oecd.org/dataoecd/54/57/37010997.pdf.

188   DFID's response to the OECD DAC Peer Review of the United Kingdom, 11 September 2006

189   Ev 132 [DFID]

190   Q 74 Mr Eric Stobbaerts; Ev 153 [British Red Cross]; Ev 216 [Merlin]

191   Q 109 Ms Joanna Macrae

192   Ev 139 [DFID]

193   Q 112 Ms Joanna Macrae



© Parliamentary copyright 2006
Prepared 2 November 2006