Handle with care: results from the 2014 Corruption Perceptions Index
By Grant Walton
Transparency International (TI) has just launched its 2014 Corruption Perceptions Index (CPI). As I explain below, we need to be careful in interpreting these results. But first, here are some of the key findings, particularly as they relate to the Pacific region.
As in 2013, Denmark topped the ranking, with a score of 92 out of 100 (a score of 100 means the country is perceived to be totally clean, 0 is totally corrupt), while the Kiwis again came in second place (scoring 91/100). In a replay of international rugby results, Australia was well beaten by New Zealand, landing in 11th spot with a score of 80 out of 100. Papua New Guinea came 145th out of 175 countries, scoring 25 for the third year in a row. PNG was in the vicinity of Ukraine and Uganda – these countries shared 142nd spot. Timor-Leste came in at 133rd position, scoring 28. The country shares that score with Nicaragua and Madagascar.
Aside from these countries, no other Pacific nations are included. As in previous years, the Pacific is poorly represented in this index.
While the release of the CPI – and the scores of the countries included – is sure to attract much media attention, caution is needed in interpreting the results. For a start, the CPI is a measure of elite perceptions – those of business people, media and other experts – of the levels of public sector corruption in these countries. Ordinary citizens don’t have a say (there are other instruments that highlight citizen voices, such as the global and regional barometers – see this example). It also leaves out private sector corruption, overlooking a key element (as this publication highlights).
The methodology of the CPI has been criticised by academics for years. Alex Cobham, Research Fellow at the Centre for Global Development, argues that the CPI should be scrapped because it distorts both our understanding of corruption and the policy responses to it. Cobham suggests that Johann Graf Lambsdorff, the creator of the index, called for an end to the CPI when he stopped calculating it in 2009.
Until 2012, the CPI’s design meant that country scores could not be compared across time. So there was really no way of robustly comparing corruption trends in, say, PNG between 2009 and 2010. Of course, this did not stop media and other commentators from doing so. In 2012, TI reviewed the CPI and changed its methodology. The changes, they suggest, allow comparison over time. Matthew Stephenson, Professor of Law at Harvard Law School, argues that the changes have been beneficial, but urges caution due to concerns about the nature of the underlying data sources[i]. He also warns against reading meaning into year-on-year changes in scores unless those changes are statistically significant.
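Stephenson’s warning about statistical significance can be made concrete. The sketch below assumes each year’s score comes with a standard error (as an average of source ratings would) and applies a simple two-sample z-test; the function name, the procedure and the illustrative numbers are my own, not TI’s exact method.

```python
import math

def scores_differ_significantly(score_a, se_a, score_b, se_b, z_crit=1.96):
    """Two-sample z-test: do two CPI-style scores differ at the 95% level?

    Assumes each score is an average of source ratings with a known
    standard error (illustrative only; not TI's published procedure).
    """
    se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
    z = abs(score_a - score_b) / se_diff
    return z > z_crit

# Hypothetical example: a country scoring 25 one year and 26 the next,
# each with a standard error of about 2 points. The one-point "change"
# is well within sampling noise, so no trend can be claimed.
print(scores_differ_significantly(25, 2.0, 26, 2.0))  # False
```

The point is simply that small year-on-year movements in a score built from a handful of noisy expert surveys are often indistinguishable from noise.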
The CPI has its defenders. It has undoubtedly helped to increase awareness about, and action against, corruption around the world. For some this is the CPI’s saving grace, and a reason it should continue. Others point to a number of surveys that show a correlation between the incidence of corruption and CPI rankings. This, they suggest, means that the index is a good proxy for actual corruption (it is worth noting, however, that other studies challenge this view).
In sum, the value of the CPI is hotly contested. So, when reading through this year’s incarnation, take care. Interpreting this index is not as straightforward as it might first appear.
[i] The methodology accompanying this year’s CPI notes that the 2014 index is calculated using 12 different data sources from 11 different institutions; 13 sources were used in 2013. It states that “for a country or territory to be included in the CPI, a minimum of three sources must assess that country”.