Less is more in evaluation questions

I am republishing this 2019 post because of a recent, heated discussion on a popular evaluation listserv about the harmful impact of excessive evaluation questions on evaluation quality.

Writing evaluation terms of reference (TOR) – that is, the document that tells the evaluators what they are supposed to find out – is not a simple exercise. Arguably, the hardest part is the evaluation questions. That section of evaluation TOR tends to grow longer and longer, and that is a problem: abundant, detailed evaluation questions may lock the evaluator into the perspective of those who have drawn up the TOR, turning the evaluation into an exercise with quite predictable outcomes. That limits learning opportunities for everyone involved.

Imagine you are an evaluator who is developing an offer for an evaluation, or who is working on an inception report. You sit at your table, alone or with your teammates, and you gaze at the TOR page (or pages) with the evaluation questions. Lists of 30-40 items totalling 60-100 questions are not uncommon. Some questions may be broad – of the type "how relevant is the intervention in its context" – and some extremely specific, for instance, "do the training materials match the trainers' skills". (I am making these up, but they are pretty close to real life.) While you are reading, sorting and restructuring the questions, important questions come to your mind that are not on the TOR list. You would really like to look into them. But there are already 70 evaluation questions your client wants to see answered, and the client has made it clear they won't shed a single one. There is only so much one can do within a limited budget and time frame. What will most evaluation teams do? They bury their own ideas and focus on the client's questions. They end up carrying out the evaluation within the client's mental space. That mental space may be rich in knowledge and experience – but it still represents the client's perspective. That is an inefficient use of evaluation consultants – especially in the case of external evaluations, which are supposed to shed an independent, objective or at least different light on a project.

Why do organisations come up with those long lists of very specific questions? As an evaluator and an author of meta-evaluations based on hundreds of evaluation reports, I have two hypotheses:

  • Some evaluations are shoddy. Understandably, people in organisations that have experienced sloppy evaluations wish to take some control of the process, and they don't realise that tight control means losing learning opportunities. But it takes substantial evaluation experience to provide meaningful guidance to evaluators – where evaluation managers have limited experience in the type of evaluation they are commissioning, their efforts to take control can be counter-productive.
  • Many organisations adhere to the very commendable practice of involving many people in TOR preparation – but their evaluation departments are shy about filtering and tightening the questions, losing the opportunity to shape them into a coherent, manageable package.

What can we do about it? Those who develop TOR should focus on a small set of central questions they would like to have answered – try to stay within five broad questions and leave the detail to be sorted out during the inception phase. Build in time for an inception report, in which the evaluators present how they will answer the questions and what indicators or guiding questions they will use in their research. Read that report carefully to see whether it addresses the important details you are looking for – if it doesn't, and if you still feel certain details are important, discuss them with the evaluators.

My advice to evaluators is not to surrender too early – some clients will be delighted to be presented with a restructured, clearer set of evaluation questions. If they can't be convinced to reduce their questions, then try to reach an agreement on which questions should be prioritised, and explain which ones cannot be answered with a reasonable degree of validity. This may seem banal to some of you – but to judge from many evaluation reports in the international cooperation sector, it doesn't always happen.
