Monitoring and evaluation for adaptive programming

Credit: waSHMEL project

Six years ago we wrote a paper called ‘The evaluation of politics and the politics of evaluation’, which argued that programs seeking to work in a politically smart manner faced particular challenges in monitoring and evaluating their performance. These included the need to account for the long-term, non-linear nature of such initiatives, and to recognise the complex, dynamic contexts in which they work. These are many of the same challenges faced by programs seeking to work adaptively.

We argued that there were a number of evaluation and research methods which, when judiciously combined, were better able to answer the kinds of evaluative questions these programs asked. Yet the inherently political nature of evaluation and the ‘results agenda’ tended to favour methods that captured and quantified short-term, tangible outputs, and to downplay those better able to assess long-term, subtle changes in relationships, power and social norms (something others, such as Eyben, Natsios, Vogel, Honig, Yanguas and Mueller, have also suggested).

Since we wrote that piece, and a follow-up paper on what some programs were doing at the time, many more programs have sought to work adaptively and in politically smarter ways. Lisa Denney’s first blog in this three-part series captures some of the lessons this has generated. In this blog we set out six take-aways focused on what is being tried and learnt in setting up monitoring and evaluation (M&E) frameworks for adaptive programs, including working in ways that recognise the political nature of evidence and evaluation.

  1. Be clear about the purposes of M&E

Designing M&E processes depends on what you want them to do. Too often M&E frameworks are designed by technical specialists in ways that exclude key stakeholders and decision makers, and/or that result in technically robust products that are too complex, expensive or time-consuming to implement. They can also conflate monitoring, evaluation and learning in unhelpful ways. The result is often lots of data collected but rarely used. There is a growing call for ‘right-fit’ systems that are clear about the different purposes of monitoring, evaluation and learning, and more strategic about collecting and using data to meet those purposes.
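
As a way of picturing what ‘right-fit’ might mean in practice, here is a minimal, hypothetical sketch (in Python) of a data registry in which every item collected is tagged with its purpose and an intended user. The field names and example items are illustrative assumptions, not a recommended tool.

```python
# A minimal, hypothetical sketch of a 'right-fit' data registry: every item
# collected is tagged with its purpose (monitoring, evaluation or learning),
# who will actually use it, and how often. All names are illustrative.
from dataclasses import dataclass

@dataclass
class DataItem:
    name: str
    purpose: str    # "monitoring" | "evaluation" | "learning"
    used_by: str    # the decision maker expected to act on it
    frequency: str  # e.g. "monthly", "quarterly", "annual"

registry = [
    DataItem("training sessions delivered", "monitoring", "program manager", "monthly"),
    DataItem("partner relationship quality", "learning", "whole team", "quarterly"),
    DataItem("contribution to policy change", "evaluation", "donor and evaluator", "annual"),
]

# The 'right-fit' test: anything without a named purpose and user is a
# candidate for dropping, before it becomes data collected but never used.
orphans = [d for d in registry if not (d.purpose and d.used_by)]
print(f"{len(orphans)} items collected with no clear purpose or user")
```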

  2. More complex programs require different but accessible M&E

For programs working in complex and political environments, the multiple stakeholders, interactions and unplanned elements mean that traditional M&E – based essentially on tracking cause and effect – does not work. Practitioners who persist with traditional approaches end up either with long lists of indicators and questions (trying desperately to cover the whole story) or with very high-level, generic measures, neither of which provides a basis for informed decisions about why and how to adapt.

Programs seeking to work adaptively should use multiple data collection and analysis methods to illuminate different aspects of the work. The aim is to provide useful information about how a program is operating, what contributions it is making to change, and the impact and relevance of those contributions. Often that means separate systems for assessing program activities and program contributions to change, plus additional assessment of the context – how it might be changing, and the implications for the program.
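
To make that separation concrete, below is an illustrative sketch of how a program might keep three distinct evidence streams – activities, contributions and context – rather than one monolithic indicator list. The stream names and example entries are assumptions for the sake of illustration, not a prescribed design.

```python
# An illustrative sketch (all names are assumptions) of keeping three
# distinct evidence streams, each answering a different question, rather
# than one monolithic indicator list.
from datetime import date

evidence = {
    "activities":    [],  # what the program did (outputs, delivery records)
    "contributions": [],  # what changed, and the program's plausible part in it
    "context":       [],  # how the environment is shifting, and the implications
}

def record(stream: str, note: str, source: str) -> None:
    """Append a dated observation to one of the three streams."""
    evidence[stream].append(
        {"date": date.today().isoformat(), "note": note, "source": source}
    )

record("context",
       "new provincial governor appointed; reform coalition weakened",
       "partner interview")
record("activities",
       "facilitated third dialogue between ministry and civil society",
       "team report")

# A program review then reads across the streams: did activities plausibly
# contribute to observed changes, and does the shifting context alter what
# the program should do next?
```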

This more comprehensive approach cannot be fully captured by a logframe or performance framework, nor can it be contained within the M&E team alone. It should be part of the fabric of the program itself. It requires thought and considerable engagement with program implementers and others who can identify the critical factors to track. The M&E therefore has to be comprehensible and accessible. As was clear at a recent Asia Foundation workshop in Manila involving M&E staff from several adaptive programs, simplifying processes, language and data collection is a common strategy. This includes using telephone apps or Facebook to collect real-time data from front-line staff, and avoiding the often alienating language of M&E in favour of local or plain English language.
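
As an illustration of how simple such real-time collection can be, here is a hypothetical sketch that turns a short, plain-language field message into a structured record. The message format and all names are assumptions for the sake of example; a real program would use whatever channel and convention its front-line staff already find natural.

```python
# A hypothetical sketch of how short, plain-language field updates (sent via
# a phone app, SMS or Facebook message) might be captured as real-time data.
# The "LOCATION | WHAT HAPPENED | WHO WAS INVOLVED" format is an assumption
# for illustration, not any particular program's convention.
from datetime import datetime

def parse_field_update(raw: str, sender: str) -> dict:
    """Turn a pipe-separated field message into a structured record."""
    parts = [p.strip() for p in raw.split("|")]
    location, event, actors = (parts + ["", "", ""])[:3]
    return {
        "received": datetime.now().isoformat(timespec="seconds"),
        "sender": sender,
        "location": location,
        "event": event,
        "actors": actors,
    }

update = parse_field_update(
    "Tacloban | barangay council agreed to co-fund water point | council, women's group",
    sender="front-line officer 07",
)
print(update)
```

The point is less the technology than the register: short, familiar language that front-line staff will actually use, captured in a form the whole team can review.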

  3. Focus on processes and relationships as much as outputs and outcomes

Human systems function on the basis of relationships, and it is these interactions that lie at the heart of social change. Furthermore, it is changes in relational practice – how people, organisations and states relate to each other – that can lead to new ‘rules of the game’, or institutions. However, what those institutions will look like is not predictable. This raises challenges for M&E in adaptive programs about how and when to measure changes in relationships, and what those changes lead to.

Adaptive programs are adopting a range of approaches to address this challenge.

  4. Resource the M&E and L

M&E to support dynamic programming does not start and finish with a framework and a set of systems. It involves ongoing data collection and analysis, ideally engaging multiple stakeholders. This requires resources, scheduling, planning and management attention, and M&E has to be part of the work plans for program implementation. Significantly, M&E for adaptive programs requires the whole team to take some responsibility for analysing and reviewing the data collected and what it means for ongoing work.

  5. Be politically smart

Once it is recognised that monitoring, and particularly evaluation, is as political as anything else, and often needs to adapt to shifting demands, then many of the tools of adaptive programming and everyday political analysis become equally applicable. These might include:

  • using everyday political economy analysis to better understand the interests, incentives and power relations of the stakeholders involved in an evaluation;
  • being clearer about the multi-level theory of change and the need for stakeholder engagement, making it more likely that M&E processes actually lead to better learning and adaptation;
  • exploring whether there are coalitions, allies or networks which can combine their experience, skills and data to provide a more powerful and complete picture of change, as well as allowing multiple voices to be heard; and
  • exploring framing and narratives which help communicate evidence but are also politically savvy. As many have argued, populism is not going to be beaten with spreadsheets.

  6. And finally

M&E for adaptive programs ought also to adapt and improve. There are few examples of a perfect M&E system for politically informed adaptive programs. The M&E has to build in its own processes of review, analysis and improvement in order to both grow with the program and stay relevant to its inquiry and assessment needs.

This is the second in a three-part series on adaptive aid programming. Part one looked at adaptive implementation, and part three at the role of research and learning. A pdf of all three posts can be found here.


Chris Roche

Chris Roche is Director of the Institute for Human Security and Social Change, and Associate Professor at La Trobe University and a Senior Research Partner of the Developmental Leadership Program. Chris has worked for international NGOs for nearly 30 years, and has a particular interest in understanding the practice of social change and how it might best be catalysed and supported.

Linda Kelly

Linda Kelly has worked widely in international development and has held senior management positions in Australian-based international NGOs (World Vision and Oxfam Australia). Since 2001 she has been the Director of Praxis Consultants Pty Ltd Australia, and in 2014 she joined the team at the Institute for Human Security and Social Change at La Trobe University, working on the interface between research and practice.

8 Comments

  • Interesting! My take-aways: collect the data that is necessary for the program; ensure a role for stakeholders and implementers in the data collection; and assess the context and its implications for the program.

    Thank you

  • An interesting read, and timely, as we are in the process of revising inputs, outputs and outcomes in the logframe of a political and technical project following a Mid-Term Review. Linda Kelly can relate to this. Will share this article with partners.

  • Very interesting article. M&E and L is an integral part of programming; adaptive programming just takes it further and tries to make M&E and L more realistic and relevant, I believe. Multiple sources of data collection and analysis, at various points and levels, should provide a holistic picture of the change story and value for money – something donor programs are pressed to account for in an adaptive and complex environment. I also agree that the logframe approach is limiting in evaluating human interactions and social change, in which ‘context’ plays an important role, and in which the use and application of ‘thinking and working politically’ and political economy analysis, among others, are necessary.

  • Thanks very much Chris and Linda for a great read. I was interested to see that ‘M&E’ was discussed in a separate blog to ‘L’. Along this vein, and in your experience/s, what can incentivise donors and providers to better link the evaluative techniques you outline to learning processes and (ideally) real-time adjustments in programming (i.e. activities, budgets, choice of partners etc etc)?

  • An interesting blog. This could end up very complex – more than most countries could sustain without extensive consultancy support, which is dangerous.

    • Totally agree with you, Ian; that is why we point to attempts to, and the need to, simplify language, processes and data collection. At the same time, we also need to recognise that there needs to be adequate support for, and resourcing of, local actors, researchers and agencies in helping to answer some difficult questions.
