Wired: Why abundant evidence won’t inevitably lead to abundance

I have an embarrassing confession to make: I only started watching the TV show The Wire midway through last year. While most of my friends, and no doubt quite a few Development Policy readers, have been aware of this television miracle for years, I completely missed it the first time round. This means that not only have I deprived myself of hours of great drama, but I have also missed the development lessons the programme is so rich in.

It may seem strange that what is ostensibly a police drama set in Baltimore could contain anything of interest to someone working in aid and development. But the show is full of relevant material, ranging from examples of the law of unintended consequences to a detailed depiction of a corrupt and clientelist polity. The squalor of Baltimore's slums is a powerful reminder that most of the world's developed countries are home to acute poverty amid their wealth. The sub-plots about the boxing gym, the unions and the churches will all feel familiar to anyone who has participated in civil society-related aid work. And throughout the show there are great examples of the complicated interactions that occur between formal and informal institutions.

Most development-relevant of all, I think, is the starring role statistical evidence plays in the problems of urban Baltimore. Specifically, it's crime stats (especially those related to murder) and school test results that wreak havoc. Body counts and exam results are easily quantified and easily reported, which means these numbers matter politically. And that means that, throughout the show, pressure is exerted to keep the body count low and the test scores up.

If this doesn't sound like a bad thing to you, then you really need to watch The Wire. Because it turns out that the best way to keep crime figures low isn't necessarily to capture criminal masterminds. And it turns out that the best way to improve test results isn't always to educate children. In Baltimore, evidence of improvement is at least part of the problem. Which is something we development folks would do well to ponder as we enter our own age of evidence.

Not only is aid data becoming more abundant, but there is also an increasing push to show that aid works. There's pressure to evaluate aid programmes more rigorously, and a new(ish) range of tools, including impact evaluations and randomised controlled trials (RCTs), is available to those who want to gather evidence to inform aid work.

This is genuinely good news: in the past, too much aid has been guided solely by ideology or by ideas about what ought to work. And too often this money has been wasted as ill-founded beliefs have come to grief in the complicated world of developing countries. So an emphasis on evidence-based aid policy, coupled with an emphasis on transparency, is an improvement. But like everything else in development, it isn't guaranteed to help the poor. There are a number of things that could go wrong.

First, certain aid activities are much more amenable than others to the quantitative evaluation approaches currently in vogue (for example, small health projects suit RCTs well, while large-scale work with governments often doesn't, nor does social change work with civil society groups). This means there's a risk that, if we're not careful, we may end up privileging aid work we can evaluate effectively over aid work that is actually effective.

Second, statistics are, at the end of the day, human creations. When pressure comes on people to push statistics in a particular direction, they will likely get creative, and too often that creativity is expressed in the way data are reported and gathered rather than in the actual solving of problems. In the world of The Wire this is known as 'juking the stats', and it's everywhere. Hopefully the aid world won't ever end up quite that bad, but if the last two decades of econometrics have taught us anything, it is that people will go to extraordinary lengths to muster data to their cause.

Third, in development as in The Wire, sitting atop the world of practice is the world of politics. In The Wire this sees all sorts of counter-productive decisions made to protect political capital. In development it raises a number of issues, first and foremost the obstacles it creates to admitting you got it wrong. Development is difficult, and an essential component of learning is being open about your mistakes. But if being open inevitably means a one-way ticket to a hiding in the media, don't be surprised if the current age of evidence in aid leads to less openness rather than more. Or to juked stats, or to projects being selected because they're likely to produce the sort of evidence necessary to claim success.

Just to be clear, none of this means that a focus on evidence is a bad thing. Nor am I saying we should give particular types of activities a free ride simply because they're hard to evaluate. What I am saying is that gathering evidence is like everything else in aid work: it's difficult to do well, and even with the best of intentions it can make things worse instead of better.

Terence Wood is a PhD student at ANU. Prior to commencing study he worked for the New Zealand government aid programme.
