The first Lessons from Australian Aid report: a flawed flagship

The Office of Development Effectiveness (ODE) within the Australian Department of Foreign Affairs and Trade (DFAT) recently released its first Lessons from Australian Aid (pdf) report, in tandem with its 2013 quality review (pdf) of DFAT’s Aid Program Performance Reports, which are annual self-assessments of progress toward the various goals articulated in country (or sometimes thematic) strategies.

My commentary on these reports, presented at the recent Development Policy Centre evaluation forum, is reproduced in condensed form below. The forum also considered an ODE ‘strategic’ evaluation of the Australian Volunteers for International Development program, on which Stephen Howes will comment separately in a subsequent three-part post.

I’ll say just a little about the 2013 quality review of Aid Program Performance Reports as it’s less significant than the Lessons from Australian Aid report. The annual quality review is a case of one mature process reviewing another. So, not surprisingly, the 2013 report is a high-quality product. Aid Program Performance Reports have been around since 2008 and are considered, particularly by program managers, to be a very useful tool for reviewing aid program performance at the country level (for a good example, see the 2012-13 Timor-Leste report). The ODE quality review criteria, reflected in five ‘cornerstone’ questions, are well-honed and appropriate.

The 2013 quality review includes some interesting observations. For example, it finds that programs run by government departments other than DFAT (or, in the relevant period, AusAID) don’t get the same thorough, frank treatment in performance reports as programs run by DFAT itself. And it finds that, surprisingly, performance reports generally don’t figure in policy dialogue and are under-utilised by senior management. It’s tempting to speculate that there’s a trade-off between quality and utilisation, such that the fuller integration of these performance reports into aid management systems might actually reduce their specificity, frankness and usefulness as local management tools.

These quality reviews would be more valuable, and presumably easier to undertake, if country program objectives were better and more consistently framed (on which see more below). And the 2013 review’s meditations on the major drivers of good program performance are relatively superficial and not very convincing. But these are not big criticisms. In fact it probably isn’t a good idea to attempt to draw over-arching lessons from country performance reports, including lessons about the appropriateness of program objectives, in a document that is primarily a process quality review. In future, that’s a job for the Lessons from Australian Aid series.

Which brings me to the main attraction. Lessons from Australian Aid, in concept, is a creature of the Independent Evaluation Committee (IEC) which was established in 2012 in response to a recommendation of the 2011 Independent Review of Aid Effectiveness. As was made clear during the evaluation forum by IEC chair Jim Adams, it is envisaged that Lessons from Australian Aid will become ODE’s flagship publication and the primary vehicle for communicating evidence on the strengths and shortcomings of Australia’s aid.

The IEC is right that the aid program needs a credible, annual flagship publication that communicates lessons from independent evaluation to decision-makers, accountability institutions and the general public. It should sit above the ruck of program-specific or thematic evaluations. It should not merely synthesise whatever such evaluations happen to have been completed in a given year, or get bogged down in assessments of process. Its role should be to communicate how well Australia’s aid resources have been deployed, in aggregate, to achieve development outcomes and impacts. There has, until now, been nothing like this.

The first Lessons from Australian Aid report, however, represents a process still in the early stages of construction. While it makes good use of information from Aid Program Performance Reports, which in many cases are very frank, thorough and credible documents, it is deficient in five respects. It fails to draw upon all available information sources. It trailed rather than informed the Annual Review of Aid Effectiveness for the period in question. It’s a synthesis of findings on ODE’s chosen topics rather than a synthesis of lessons learned from Australian aid. It focuses excessively on perceptions of the quality of the active portfolio, and not sufficiently on evidence about the quality of completed activities. And, even with respect to the active portfolio, it gives insufficient attention to the appropriateness of program objectives. I expand below on each of these points.

Findings of operational evaluations were not taken into account

Lessons from Australian Aid draws upon three sources: various strategic evaluations conducted by ODE from early 2011 to mid-2012, activity-level ‘quality-at-implementation’ reports and the annual quality reviews of Aid Program Performance Reports. It doesn’t draw upon what ODE calls ‘operational’ evaluations—that is, evaluations commissioned by geographic and sectoral areas of DFAT, including but not limited to independent evaluations undertaken upon project completion. There were some 86 of these operational evaluations in 2012. Nor does it take into account performance information on partnerships with multilateral organisations or non-government organisations—the systems for gathering such information are new and under development, respectively. At least we learn at the very end of the report that such information will be included in the 2014 and subsequent lessons-learned reports. We learn also that a review of operational evaluations is underway, looking at those completed in 2012.

The lessons-learned report did not inform the Annual Review of Aid Effectiveness

Even allowing that program-wide lessons-learned reporting is a new process which in this first year overlaps with other, pre-existing ones, it’s hard to understand why the first lessons-learned report was released after the 2012-13 Annual Review of Aid Effectiveness and before the review of 2012 operational evaluations. Logically, the review of operational evaluations would have been completed as a fundamental input to the lessons-learned report, which in turn would have been a fundamental input to the Annual Review of Aid Effectiveness. The actual sequencing was precisely the opposite: the Annual Review of Aid Effectiveness came first, the lessons-learned report second and the review of operational evaluations a distant and as yet unseen third. Let’s hope it’s the other way around next time.

ODE found what it went looking for

Now let’s look briefly at the actual lessons learned, which will knock nobody’s socks off. Three things are highlighted as major drivers of good program performance: engaging in effective policy dialogue, ‘harnessing’ the strengths of civil society and the private sector, and supporting institutional rather than individual capacity building. The first two of these three lessons happen to relate to the topics of recent ODE strategic evaluations. The third, on capacity building, serves to sweep together various other strategic evaluations. Thus it looks rather as if ODE was either very prescient in its choice of strategic evaluation topics, or inclined to find the major drivers of good program performance wherever it happened to be looking. The latter explanation seems the more likely.

If this seems unfair, it’s worth looking at the thorough review of Independent Completion Reports and other operational evaluations which was commissioned by the Independent Review of Aid Effectiveness in 2011. This produced some of the same findings but embedded them in broader and intuitively more credible lessons, such as that ‘well-contextualised design with strong ownership and leadership by partner governments, and intelligent, analytical and responsive implementation … were the principal … drivers of effectiveness’ (p. iii). That review also found a ‘close correlation between greater use of government systems … and sustainability’, a point on which the ODE report is silent—except when it rather implausibly suggests that consistently low ratings for activities with respect to monitoring and evaluation are caused by reliance on partner government systems, which is hardly prevalent in the Australian aid program.

It’s clearly odd that a lessons-learned report would have essentially nothing to say about alignment with partner governments’ strategies, flexibility in implementation and the use of partner government systems. In other words, Lessons from Australian Aid is something of a misnomer for this first report; it’s more a synthesis of recent strategic evaluation findings.

Keeping things on track wins out over lesson-learning

A bigger problem with this first lessons-learned report is that it is based very heavily on aid managers’ perceptions of how their activities or programs are going, or on ODE’s assessments of how well-founded those perceptions are. As noted above, the findings of strategic evaluations do play a substantial part, but those evaluations aggregate large numbers of activities and also tend to dwell heavily on activities in implementation, rather than completed activities. There is a striking absence of interest in the outcomes and impacts of completed activities. This is reflected, for example, in the benchmark previously used in the AusAID Annual Report for activity quality: 75 per cent of active programs are expected to have satisfactory quality-at-implementation ratings. The World Bank’s corporate scorecard, by contrast, has a 75 per cent target not for active programs but for completed ones (supported by its concessional financing arm). This is both more meaningful and more demanding, given the well-documented upward bias in managers’ quality ratings for activities at most stages of implementation.

The planned inclusion in future years of information from operational evaluations, including independent completion reports, will be a positive step but will not necessarily change the balance as much as it should toward assessing the impacts of completed activities. For as long as ODE is expected to play such a large part in helping DFAT keep current activities on track, it will be less able to draw useful lessons about what has worked in the past and what might work in the future. Certainly there is a role for ODE in meta-level quality assurance—that is, quality assuring DFAT’s aid management systems, including its quality assurance systems, through spot checks, quality reviews of Aid Program Performance Reports and other means. But ODE’s flagship publication should focus on impacts and lessons learned, not on reporting on the perceived quality of the active portfolio. The latter task should be left to the Annual Review of Aid Effectiveness, which should be prepared, as it so far has been, outside ODE.

Country program objectives, not just activity ratings, need ODE’s scrutiny

Looking at the quality reviews of Aid Program Performance Reports for the last couple of years, it seems that around half of the 180 or so objectives of our bilateral aid programs (each program typically has four to six such objectives) are on track to be achieved. It’s notable that at the activity level DFAT’s expectations are much higher than this. It wants 75 per cent or more of activities to be satisfactory across a range of dimensions, and it seems Australia’s aid program generally exceeds this target. But once activities are swept up under broader program objectives, only about half of Australia’s efforts are rated satisfactory. This could mean various things. For example, a majority of good activities might be concentrated under a minority of objectives, or country program objectives could be too ambitious or badly framed. There’s no paradox here, but the contrast between the two measures of program performance is something to be explained.

Part of the answer must be that country program objectives are in fact too ambitious or poorly framed. It’s instructive to look at the actual list of such objectives provided as an annex to the 2013 quality review of Aid Program Performance Reports, and to compare the Indonesia program’s very clear and specific objectives with just about everybody else’s. If findings from these performance reports, and from ODE’s quality reviews of them, are to be useful inputs for a lessons-learned exercise, program objectives need to be sharpened up considerably. The emphasis should not simply be on improving activity-level quality-at-implementation ratings.

So, overall, the lessons-learned report has a long way to go. That is to some extent acknowledged by ODE in the 2013 report. Future such reports will need to incorporate performance information from the full range of performance information systems, and become less susceptible to seeing only the lessons that ODE goes looking for. They should place much more emphasis on lessons from completed activities, and on portfolio quality as measured with respect to completed activities. They should leave reporting on perceptions of the quality of the active portfolio to the Annual Review of Aid Effectiveness, led by the aid operations arm of DFAT. And they should give greater attention to both the quality of country program objectives and progress against those objectives, with the aim of closing the apparent quality disconnect between the activity level and the country program level.

The first Lessons from Australian Aid might be more prototype than flagship, but the launching of the series to which it belongs is very much to be welcomed. It’s to be hoped that its new publishing house, DFAT, perceives the value of a strategic and credible (therefore not always complimentary) assessment of what has actually been achieved with Australian aid. Both achievements and failings are at present very poorly captured (hence our Australian Aid Stories project). Put in plain sight, achievements and failings are of little interest to tabloids but of much interest to those who would hope to repeat the former and avoid the latter.

Robin Davies is Associate Director of the Development Policy Centre.
