Delivering food aid to PNG, 1997 (DFAT/Flickr/CC BY 2.0)

Aid evaluations: an integration success story

By Stephen Howes
1 February 2018

In November 2016, the Department of Foreign Affairs and Trade (DFAT) released a new aid evaluation policy. In February 2017, the Office of Development Effectiveness (ODE), which is responsible for evaluation within DFAT, released the aid evaluation plan for 2017. It promised 46 evaluations in 2017 across the department: seven by ODE, and 39 by other, operational parts of the organisation. I thought it was ambitious at the time, and we went back early this year to have a look at what was achieved.

Thirty-four of the 46 aid evaluations have been completed and published (somewhere on the DFAT website, though unfortunately not necessarily listed on this potentially handy database of evaluations). That's not a bad effort: 74%.

In the meantime, I’m informed, ODE has modified its 2017 aid evaluation plan, dropping a few evaluations that were not going to be completed on time, and adding a few others. According to the revised plan, the ratio of actual to target evaluations looks even better: 95%.

Either way, it’s a pretty impressive outcome.

ODE itself is an integration success story. It wasn’t abolished when transferred from AusAID to DFAT. It wasn’t merged with some other unit. It was largely left alone. And it has become more productive, publishing an average of six evaluations per year post-integration, compared to only 2.3 before.

It isn’t just ODE; the broader evaluation effort across the aid program has also improved since integration. The 2011 Independent Review of Aid Effectiveness (in which I participated) found that only about one-quarter of the evaluations that were meant to have been done over the preceding five years had been completed; and that, of the completed evaluations, only two-thirds could be found (!) and only one-fifth had been published.

To go from rampant non-compliance to substantial compliance is an achievement. The problems raised in the Independent Review were never fixed by AusAID. One reason for the recent improvement is that DFAT has been more realistic on aid performance regulations than AusAID ever was. AusAID never accepted the Independent Review’s suggestion that only a small number of interventions should be subject to mandatory evaluation (under ODE oversight). It would never let go of the rule that every aid activity should have its own aid evaluation. DFAT’s belated adoption of the Review’s recommendation might seem like a watering down, but a more realistic regulation complied with is more effective than an unrealistic one ignored.

Of course, quantity is only one indicator of the success of evaluation. There is also the harder-to-judge question of quality. Usefully, ODE also undertakes reviews of the operational evaluations, most recently in 2016. One oft-heard criticism of DFAT evaluations is that they are too soft. While that will always be a risk for evaluations that are not fully independent, there is also, from a quick and selective read, some fairly frank feedback in the body of evaluations recently published. Take the evaluation of Australia’s humanitarian response to Papua New Guinea’s 2015-16 drought. This intervention gets rated “less than adequate quality” from a “community” or beneficiary perspective. The evaluation notes that “[t]he assistance received by affected communities was generally very late, generally excluded food, and the rice that was airlifted by Australia was only enough cereal for an average 11 days per person.” There are also plenty of useful lessons to be learnt from this evaluation to prepare for PNG’s next drought, starting with the conclusion that DFAT’s planning for the 2015-16 drought was “less than adequate quality.”

Partly in response to this perception that the evaluations are too soft, there have been discussions from time to time about making ODE independent. In October 2015, Tanya Plibersek, who was then Labor’s Shadow Minister for Foreign Affairs, announced that Labor would “legislate for transparency and accountability to improve aid effectiveness”, including “for the independent evaluation of the effectiveness of the aid program.” In his 2016 speech to the Australian Council for International Development (ACFID), Richard Moore, a former senior aid official, recommended moving ODE to Prime Minister and Cabinet to ensure “more independent external scrutiny” of aid.

Independence is an intrinsic virtue for evaluation, but it would bring costs as well as benefits. One cost of taking ODE out of DFAT is that you would immediately lose its oversight and encouragement of evaluations within the organisation. ODE would go from evaluation champion to threat.

The UK has gone down the independent evaluation route. David Cameron created the Independent Commission for Aid Impact (ICAI) as part of his commitment to hitting the 0.7% target for aid. In 2016, my colleague Ashlee Betteridge and I talked to a number of UK DFID and ICAI staff. We came away with the sense that the UK model had a number of benefits, but not decisive ones.

Overall I would suggest building on what is working. ODE was established in 2006. It has emerged as a champion of evaluation within the aid program. I’m in favour of aid legislation. But, as my colleagues Robin Davies and Camilla Burkot argued in 2016, the purpose of aid legislation should be to mandate evaluation and transparency for Australia’s aid program, not to take ODE out of DFAT.

With thanks to Sachini Muller for research assistance, and to Ashlee Betteridge for her earlier work on ODE evaluations.

About the author

Stephen Howes
Stephen Howes is Director of the Development Policy Centre and Professor of Economics at the Crawford School of Public Policy at The Australian National University.
