The Office of Development Effectiveness (ODE) released an evaluation of AusAID’s HIV program in Papua New Guinea in 2012. The report assesses the program between 2006 and 2010. As mentioned in our previous blog post, it paints a rather gloomy picture. What is particularly worrying is the finding that HIV education and prevention activities were largely ineffective. At 20 per cent, these efforts constitute the largest proportion of AusAID funding to the PNG HIV response. The evaluation concludes that the overall effectiveness of AusAID’s $174 million intervention was “less than satisfactory” (p. 104) and that while there was an effective contribution to counselling and testing, other areas, such as education and prevention, were “mostly ineffective.” The executive summary despairs that “there is no evidence that prevention programs are reducing the number of new HIV infections” (p. 104).
After taking a closer look at the evidence in the report, we find ODE’s evaluation to be insufficiently grounded and perhaps overly pessimistic. The evaluation lacks the evidence and explanation needed to substantiate such a strong claim. Perhaps AusAID deserves a little more credit than is implied by the ODE team.
The lack of improvement in condom usage (p. 42) presents the most convincing argument to support the conclusion of ineffectiveness. The evaluation team supplements quantitative evidence from a 2010 PNG report (the so-called UNGASS report) with qualitative observations. It notes that severe distribution problems resulted in significantly less condom access than intended. Of the 40 million AusAID-funded condoms, 38 million remained undistributed [pdf, p. 15]. This is significant because condoms had been designated [pdf, p. 32] as one of the main instruments targeted by HIV prevention activities. The evaluation team draws the logical implication that prevention efforts could not have had much effect on HIV prevalence.
However, while there were clearly problems with the project, the evaluators’ arguments for ineffectiveness are weakened by insufficient and unclear explanations, and their credibility is undermined by questionable treatment of available data.
First, the authors show in their Table 1 a sizeable drop in the 2009 new infection rate relative to 2007 and 2008, but then claim (p. 42), without substantiation, that this is predominantly due to data improvements. Surprisingly, the evaluation sources its data from a 2010 surveillance report from the National Department of Health (NDOH), which warns [pdf, p. 15] against relying on precisely the numbers cited by the evaluation. The 2010 NDOH report proposes alternative numbers based on a more secure source, but then also warns (pp. 15-16) against the use of the 2007 and 2008 data. The most reliable inference on new infections is in fact, as the 2010 NDOH report notes, that they increased up to 2006 and then stabilized, at least through 2009.
With the new infection rate not falling, perhaps the HIV program did not work as well as it should have. But the stabilization, as opposed to continued rise, may indicate that the HIV program was effective to an extent, or at least that it was not as ineffective as the authors conclude. This suggests that it is inappropriate and potentially overly pessimistic to conclude that most education and prevention activities were ineffective.
Second, and in contrast to the claim of the evaluation that “there is no empirical evidence of a fall in new HIV infections” (p. 41), we find an encouraging sign. The 2010 NDOH surveillance report provides evidence that the HIV prevalence amongst pregnant women has fallen (see Figure 10a in that report). The evaluation does not mention this.
Third, the evaluators report that “STI incidence and prevalence appear to remain unchanged or getting worse” (p. 42). However, they do not provide convincing evidence. The report they rely on for this inference (the 2010 UNGASS report) shows increasing total reported sexually transmissible infection (STI) cases. The evaluation authors conclude on this basis that “prevention activities to reduce STI incidence and prevalence are failing miserably” (p. 42). But there may be other explanations. Factors such as improved accessibility to STI facilities or population growth could have increased the total number of patients and therefore also the number with STIs. It is also possible that increased awareness of STI symptoms and consequences may be leading to more affected individuals going to clinics, and to more clinics providing STI services. The failure to consider these alternative explanations further reduces the credibility of the conclusion of ineffectiveness.
Overall, the ODE evaluation provides a useful overview of the progress of AusAID-funded education and prevention activities. However, the evaluation is less successful at convincing the reader that these efforts were largely ineffective. Looking ahead, data collection and evaluation frameworks must be enhanced to improve the accuracy and reliability of findings. In the meantime, more care should be taken to draw on all available data, and not to draw conclusions beyond what the data will bear.
Cheryl Che and Ruth Tay are studying the Masters in International and Development Economics at the Crawford School of Public Policy, ANU. Stephen Howes is Director of the Development Policy Centre. This combined post is based on the essays Cheryl and Ruth wrote as part of their Aid and Development Policy class, in the second semester of 2013, where students were required to review some aspect of a recent evaluation from the Office of Development Effectiveness. Another essay from the same assignment can be found here. On Friday 21 March, Devpolicy will host a forum to discuss new ODE evaluations on volunteers and aid quality.