Reflections on theory of change development

It has been a year (already!) since the conference on theory of change (ToC) development organised by the Methods circle of DeGEval, the German Evaluation Society. We met at the impressive (1950s) premises of DeStatis, the Federal Statistical Office in Wiesbaden. It was a massively enjoyable and super well-organised event with insightful contributions about the history and practice of ToC development.

A theory of change helps us explore why and how well-defined things have (or haven’t) happened in an intervention. But beware! It is just a theory – not any ultimate truth.

Apparently, the term ToC was coined in the early 1970s by the late Carol Weiss, a key founder of the evaluation of social programmes and policies. A ToC can clarify how an intervention (such as a project or programme) is supposed to work – what it is doing, what it is intended to accomplish, and what the likely connections are between actions, their products and wider results. Carol Weiss showed, among other things, that a theory of change helped find out whether a programme or policy did not work because it was not carried out as planned (implementation failure), or because it was based on faulty thinking about the ways in which the desired change would happen (theory failure).

One can develop a ToC, or a tentative ToC, deductively, using generalisable evidence from research, evaluations and evidence reviews – ideally when planning an intervention. Evaluators often find themselves working inductively only, reconstructing ToCs later in the life of an intervention, when there was no explicit theory of change from the start: we pull together data from the project documentation and facilitate participatory processes to reflect on the intervention, its intended results, and the logical connections between activities and their broader outcomes. In the process, we also develop hypotheses on how things happen in the ToC – that is an abductive activity. Then we test that tentative ToC against existing evidence (deduction, again) and data from the intervention (induction).

Apologies for this addiction to fancy terminology! It makes ToC development sound very complicated. If you do it thoroughly, it is complicated. Anna von Werthern showed, in her conference presentation on unpacking the black box of ToC, how she performed several loops of careful data collection and joint analysis to develop, step by step, a well-founded ToC grounded in evidence. Such iteration is important.

What does a ToC need to be useful? That question guided the discussion concluding the workshop. The consensus was: To be accepted and trusted, a ToC does not need to deliver utter certainty. But it needs to be clear (defining its elements), well presented (with a crisp diagram that focuses on necessary elements), practical (elements that matter most to the stakeholders, and terminology that is familiar to them – using fewer big words than this blog post 😉), and transparent about its limits.

Depending on what an evaluation needs to find out, ToC development or review can be quick and rough, or more thorough. There are only a few cases in which I would skip it entirely in evaluation or planning. Even a lighter-touch ToC development process with a very tentative theory of change helps structure one’s thinking. It forces people to define what they do and what they want to achieve, surfacing assumptions and making it easier to test them.

Evaluation – a waste of money?

The other day a friend told me something slightly shocking. They felt it made no sense anymore to commission external evaluations – because evaluation quality was shoddy and hardly anyone read the reports, anyway.

To see someone draw the consequences from such a sorry state of evaluation is shocking. But it does not surprise me that the quality of their evaluations is poor. So many evaluation terms of reference (ToR) in international cooperation are flawed: sky-high expectations (dozens of difficult questions, e.g., on impact directly caused by a very limited intervention in a complex sector, and on long-term sustainability while the project is still running), a tiny timeframe easily counted in weeks, and a budget that is just enough for “two dudes, two weeks, 20 interviews” (that is a quote, but I don’t know the source). That kind of set-up makes it very hard to come up with any new or deeper insights for people in the project or programme. Of course, an evaluation team can try to make the best of a bad situation and stimulate joint reflection within the short process, so that there is at least a bit of process use of the evaluation. Or they could compare existing evidence reviews with the project logic or theory of change to hypothesise about potential impact and sustainability. Or, if they are experienced specialists in the project’s subject matter, they could reframe the evaluation as a kind of specialist feedback. It is possible to draw some use from small evaluations. But small evaluations will not provide robust evidence across the six evaluation criteria (relevance, coherence, effectiveness, efficiency, impact, sustainability) used in international development cooperation.

If we need robust evidence, then it may be a good idea to go for bigger evaluations that look at broader sets of interventions in their specific contexts. More use should be made of existing evidence (evidence reviews, academic research…) instead of trying to reinvent the wheel and find everything out from scratch. We can still have quicker, smaller, advisory-type evaluations to answer a handful of specific, practical questions that support decision-making. We can also inject more evaluative thinking into organisations, for example by developing a culture of internal evaluation or a system of regular smaller reviews.

There are so many possible ways of making evaluation more useful. Evaluation is an opportunity to gain insights that help us make things better. But you are going to miss that opportunity if you frame evaluation as a routine exercise, or as some kind of afterthought. Doing evaluation just because it has to be done is a waste of money indeed.