A recipe for failure in evaluation

For an evaluation to be useful, it needs to be well prepared, typically in the inception phase: That is the moment when evaluators familiarise themselves with what they are supposed to evaluate (the evaluand). For instance, a kick-off workshop would present the contours of the evaluand and key stakeholders to the evaluators. Then the evaluation team would interview a few people to understand more (exploratory interviews). They would map the different elements of the evaluand (e.g., all the individual projects in a programme) and the available data. Based on these first steps, the evaluation team can design the evaluation and develop its data collection tools. There is always some waiting time in that process – the evaluation team needs to receive documents, people have to be available for interviews…

You can’t do that within a couple of weeks unless the evaluand is uncomplicated and those who commission the evaluation have prepared everything beforehand – a matrix showing all the elements of the evaluand and their key features, a stakeholder list, and a catalogue and ready-to-use cache of documentation and other available data.

Too often, organisations that commission evaluations allow just 10-20% of the time for inception. As a result, data collection and analysis need to start while the evaluators are still trying to understand the evaluand, still working out which questions to ask, how to sample or select information sources, and so on.

Last week I received a perfect example of terms of reference that set an evaluation team up for failure: One month for inception, three months for data collection and analysis (in the middle of the holiday season, i.e. effectively two months), and six full months for report writing and reviewing – for the evaluation of a facility that funds hundreds of initiatives in a highly diverse set of countries spread across three continents. The evaluators will have just one month to try to understand the complicated evaluand and their client. That month won’t even be enough to obtain all the necessary reports on the intervention. The evaluators will scramble to make sense of whatever data they can find and collect at such short notice, and then they’ll have six months to debate with their clients how they should word their findings. Not a good way to generate evidence for decision-making.

It would be so easy to make things more practicable. That evaluation spans 10 months. Why not go for 4 months for inception, 4 for the actual research, and 2 for reporting?