Less is more in evaluation questions

I am republishing this 2019 post because of a recent, heated discussion on a popular evaluation listserv about the harmful impact of excessive evaluation questions on evaluation quality.

Writing evaluation terms of reference (TOR) – that is, the document that tells the evaluators what they are supposed to find out – is not a simple exercise. Arguably, the hardest part is the evaluation questions. That section of evaluation TOR tends to grow longer and longer. This is a problem: abundant, detailed evaluation questions may lock the evaluator into the perspective of those who have drawn up the TOR, turning the evaluation into an exercise with quite predictable outcomes. That limits learning opportunities for everyone involved.

Imagine you are an evaluator who is developing an offer for an evaluation, or who is working on an inception report. You sit at your table, alone or with your teammates, and you gaze at the TOR page (or pages) with the evaluation questions. Lists of 30-40 items totalling 60-100 questions are not uncommon. Some questions may be broad – of the type "how relevant is the intervention in its context" – and some extremely specific, for instance, "do the training materials match the trainers' skills". (I am making these up but they are pretty close to real life.) While you are reading, sorting and restructuring the questions, important questions come to your mind that are not on the TOR list. You would really like to look into them. But there are already 70 evaluation questions your client wants to see answered, and the client has made it clear they won't shed a single one. There is only so much one can do within a limited budget and time frame. What will most evaluation teams do? Bury their own ideas and focus on the client's questions. The evaluation ends up being carried out within the client's mental space. That mental space may be rich in knowledge and experience – but still, it represents the client's perspective. That is an inefficient use of evaluation consultants – especially in the case of external evaluations, which are supposed to shed an independent, objective or at least different light on a project.

Why do organisations come up with those long lists of very specific questions? As an evaluator and an author of meta-evaluations based on hundreds of evaluation reports, I have two hypotheses:

  • Some evaluations are shoddy. Understandably, people in organisations that have experienced sloppy evaluations wish to take some control of the process, and they don't realise that tight control means losing learning opportunities. But it takes substantial evaluation experience to provide meaningful guidance to evaluators – where evaluation managers have limited experience in the type of evaluation they are commissioning, their efforts to take control can be counter-productive.
  • Many organisations adhere to the very commendable practice of involving many people in TOR preparation – but their evaluation department is shy about filtering and tightening the questions, losing an opportunity to shape them into a coherent, manageable package.

What can we do about it? Those who develop TOR should focus on a small set of central questions they would like to have answered – try to stay within five broad questions and leave the detail to be sorted out during the inception phase. Build in time for an inception report, in which the evaluators present how they will answer the questions and what indicators or guiding questions they'll use in their research. Read that report carefully to see whether it addresses the important details you are looking for – if it doesn't, and you still feel certain details are important, discuss them with the evaluators.

My advice to evaluators is not to surrender too early – some clients will be delighted to be presented with a restructured, clearer set of evaluation questions. If they cannot be convinced to reduce their questions, then try to agree on which questions should be prioritised, and explain which ones cannot be answered with a reasonable degree of validity. This may seem banal to some of you – but to judge from many evaluation reports in the international cooperation sector, it doesn't always happen.

Five tips for remote facilitation

This is a rerun of a blog post I wrote a year before I started running training workshops on online facilitation (with the PME Campus, for example). Everything I wrote then is still valid. I promised to move some posts from my old blog to this new one, so here is the post:

Despite the risks and uncertainties associated with independent consulting, I have never felt as privileged as I do now, living in a country with a highly developed, accessible health system, working from my customary home office, and equipped with a decent internet connection and the hardware needed to stay in touch with friends and colleagues. The crisis has been an opportunity to develop my remote facilitation skills. Before, I facilitated the occasional "real-life" workshop in a conference room with video equipment, with participants in other locations joining us via Skype or the like. I have shared that type of hybrid experience on the Gender and Evaluation community pages. Now I have gone one step further, facilitating fully remote workshops from my home office. I mean interactive workshops with some 5-20 people producing a plan, a strategic review or another joint piece of work – not webinars or explanatory videos with hundreds of people huddling around a lecturer who dominates the session. To my delight, virtual facilitation has worked out beautifully in the workshops I have run so far. Good preparation is a key element – as in any workshop. I have distilled a few tips from my recent experience and from the participants' feedback.

  • Plan thoroughly and modestly. Three to four hours per workshop day is enough – and there is only so much you can do in half a day. Factor in breaks (at least one per hour), time for people to get into and out of virtual breakout rooms, and at least five minutes per workshop hour for technical glitches (a rough time budget is sketched after this list).
  • Try to make sure all participants can see each other's faces. Some videoconferencing platforms allow you to see dozens of participants on the same screen. If you use a platform that shows only a handful of speakers, try to rotate speakers so that everyone can catch a glimpse of every participant. Apparently, recent research shows that remote meetings are more effective if people see each other. Smile! Keep interacting with your webcam and watch participants' faces as carefully as you would if you were in a room with them.
  • Pick facilitation tools that match your participants' digital skills. I love software that allows everyone to post "virtual" sticky notes and move them around on a shared whiteboard. But that'll work only if all (or a critical mass of) participants like experimenting with web-based tools. If many participants are uncomfortable with collaborative web-based visualisation, then you can record key points on the virtual whiteboard (live or between sessions), or ask participants to send their text contributions to you or your co-facilitator to post on their behalf. The best way to gauge participants' readiness is a technical rehearsal well before the workshop (ideally, at least a week earlier).
  • Share a written technical briefing before the workshop. It should include (i) the links and passwords for the conference and the tools, (ii) guidance on how to maximise data transmission speed – for instance, by using a LAN cable, switching off Wi-Fi on all non-essential devices, temporarily disabling Windows updates, and closing all other computer windows, (iii) guidance on troubleshooting in case of major technical problems (e.g. alternative dial-in numbers, persons to contact if a participant fails to get back online), and possibly (iv) links to a couple of very short (1-2 minute) tutorials for any software you may use for web-based joint visualisation or other forms of co-creation.
  • Do your homework. And give homework. If the digital tools you'll use are new to you, try them out with colleagues and friends before the actual workshop. There is a growing body of video tutorials on the sprawling world of virtual collaboration; check out these resources. I also like the quick primer for running online events on Better Evaluation, which contains plenty of useful links. Before and in between workshops, invite participants to try out any tools that are new to them, and/or to continue working on the collaborative virtual whiteboard.
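To make the first tip concrete, here is a minimal time-budget sketch for a four-hour session. The specific figures (ten-minute breaks, four three-minute breakout transitions) are my own illustrative assumptions, not rules from the post above:

```python
# Rough time budget for a four-hour online session, following the rules of
# thumb above. Break length and number of breakout transitions are
# illustrative assumptions.
session_minutes = 4 * 60
breaks = 4 * 10        # at least one break per hour, here ~10 minutes each
glitch_buffer = 4 * 5  # ~5 minutes per workshop hour for technical glitches
transitions = 4 * 3    # four moves into/out of breakout rooms, ~3 minutes each

net = session_minutes - (breaks + glitch_buffer + transitions)
print(f"Net working time: {net} of {session_minutes} minutes")
# -> Net working time: 168 of 240 minutes
```

In other words, a "four-hour" workshop leaves well under three hours of actual working time – plan your content accordingly.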

It is generally recommended to work in tandem, with one facilitator running the workshop and the other looking after the technical aspects. But if you facilitate only one or two three- to four-hour sessions a week and you type really fast, you can manage on your own. Be prepared, though, to feel totally exhausted after each session!

Take time when preparing (for) evaluations

In 2012, I published a post with the title above on my former blog. And I still see major evaluations with budgets running into hundreds of thousands of euros that come with a four-week inception phase, or that are supposed to start basically the day after the evaluation firm or evaluator has been selected. That is wasteful, because an evaluation that is not tailored to its users' needs risks being… useless.

Ideally, one should start planning evaluations right when the project/programme that is to be evaluated starts. Back in 2012, I recommended starting to recruit evaluation teams at least six months ahead of the field work – at that time, the evaluations I had in mind were evaluations of individual projects run by civil society organisations (CSOs). With anything bigger or more complicated, I'd plead for much, much more time for finding the evaluation team, briefing it and developing a robust evaluation design with instruments that fit their purpose. But the gist of my 2012 post is still valid – and I had promised to re-publish a few of my earlier posts. Here it is:

There has been an extraordinary flurry of calls for proposals for external evaluations. This is good news; it suggests that people find it important to evaluate their work. But, upon closer examination, you'll notice that many calls expect the evaluations to begin just a couple of weeks after the deadline for offers, and to end within a month or so. That is frustrating for experienced consultants, who tend to be fully booked several months ahead. Narrow time frames may also make it difficult for those who commission the evaluation to identify sufficiently skilled and experienced candidates. If you take evaluation seriously, then surely you want it to be done in the best possible way with the available resources?

Over the years, I have come to appreciate time as a major element of evaluation quality. Most development organisations (not only CSOs) cannot and do not want to afford full-fledged scientific-quality research, which typically involves plenty of people with advanced academic degrees and several years of research. That is perfectly reasonable: if you need to make programme decisions on the basis of evaluations, you can't afford to wait for years. (The programme would be over, the context would have changed, your organisation would have changed its priorities, to name but a few likely problems.) But what one can afford – even on a shoestring budget – is to allow plenty of time for thinking and discussing during the preparatory phase of an evaluation. In that way, you can make sure (among other things):

  • the terms of reference (TOR) express exactly what you need
  • the participating organisations are well-prepared and welcoming (which they are more likely to be if the TOR have been worked out with them and take their wishes into account)
  • the evaluation team understands what they are supposed to evaluate
  • the evaluators can reflect on different options, discuss these with key evaluation stakeholders, and let their thoughts mature over a few weeks before deciding on the final design
  • there is enough time to sample sites & projects/components so as to achieve a maximum of representativeness or a good choice of cases – to avoid visiting only what a Chinese expression calls "fields by the road"
  • data collection tools can be pre-tested, adjusted, those collecting the data trained and so forth

Extra time for these activities does not necessarily mean more consulting days – just spreading out the days budgeted for, and finding ways of making better use of existing data in the project, can make a big difference.

Written surveys without writing

Back in 2013, my colleague Wolfgang Stuppert and I carried out a written survey that did not involve any writing – a useful instrument when you work with people with limited literacy. This is a 'reprint' of the post I wrote on my former blog (developblog – now just an archive of posts from 2008 to mid-2021).

The survey was part of an evaluation of services for survivors of violence against women and girls in Mozambique. We felt it was important to gather basic data and feedback from as many women and girls who used the services as possible. But we had little travel time in Mozambique and no resources to recruit and train dedicated enumerators who would administer a survey on our behalf. Therefore, we decided to organise a written survey that the clients would fill in themselves. Some users, we were told, could not read and write well enough to fill in a form. Still, virtually anyone could hold a pen and tick off images. That is why we went for the following process:

We wrote up a set of short, simple questions, to be read out by the receptionist or other staff of the service centre to the client, just before the client would leave the centre. The questions were preceded by a straightforward explanation as to how the client would use the answer sheet (pictured below).

Of course we briefed the centre staff as to how to read out the instructions and questions, without paraphrasing or using their own examples, so as to reduce the potential for bias induced by those reading out the questions.

And this is how the clients recorded their answers on the exit poll: Each client received the answer sheet/card with rows of symbols, each row representing the possible answers to one of the questions. Each time the centre staff read out a question, the client would tick off the relevant symbol on the card. Sitting at a distance from the staff, she could hide her response.

At the end, the client would fold up the response card, staple it and insert it into a sealed box.

That process was organised during the couple of months preceding our 'own' field work in Mozambique. Upon arrival, we collected the boxes, broke the seals and coded the responses. We did not come across anything that would have suggested ballot-rigging or other tampering by centre staff. And we were very impressed by the large number of answers, which generated quite interesting statistics – including some data that helpfully challenged our assumptions about the service users and their experience.
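For illustration, here is a minimal sketch of the kind of tabulation this involves, assuming each answer sheet has been transcribed as one record of ticked symbols per question. The question labels and symbol codes below are invented for the sketch, not taken from the actual survey:

```python
from collections import Counter

# Hypothetical transcription of three answer sheets: one dict per sheet,
# mapping each question to the symbol the client ticked.
sheets = [
    {"treated_respectfully": "smiley", "would_recommend": "thumbs_up"},
    {"treated_respectfully": "smiley", "would_recommend": "thumbs_down"},
    {"treated_respectfully": "neutral_face", "would_recommend": "thumbs_up"},
]

# Tally the ticked symbols per question into frequency tables.
tallies = {}
for sheet in sheets:
    for question, symbol in sheet.items():
        tallies.setdefault(question, Counter())[symbol] += 1

# Print a simple frequency table with percentages for each question.
for question, counts in tallies.items():
    total = sum(counts.values())
    print(question)
    for symbol, n in counts.most_common():
        print(f"  {symbol}: {n} ({n / total:.0%})")
```

Once the responses are in such a structure, cross-tabulations and simple charts follow easily.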


Unclutter your video appearance!

Videoconferences can be too revealing. Not just for people who forget to switch off their camera/microphone during breaks, only to blow their nose trumpet-style or even poke it (believe it or not, I still witness such moments)! Interior decoration or disorder can also be an issue.

I do not like to blur my background (most videoconference programs have an option to do that) because I tend to move and gesture a great deal, especially when facilitating. A blurred background can make my limbs, or any objects I hold, disappear when I move. But I still want an uncluttered image on video, which is hard to achieve when there are shelves in the office or the desk is a bit messy.

My office holds some artworks that I do not want to have at home – and that I am not too eager to share with everyone I talk to on-screen. The picture to the left shows what the office looks like when I walk into it on a sunny winter day.

Now, the middle picture displays my most uncluttered background for videoconferences, and the right-hand one an intermediate option which includes my cheerful yellow door. The latter is a selfie and my arms are quite short; in real videoconferences I sit further from the screen, so that there is more empty space around me – I feel that introduces a nice sense of calm. It also provides a good background for my wild gestures!

The simple trick is to place the video camera right on top of the (external) computer screen and rotate the screen towards the only empty spot on my office walls. As you see in the picture to the left, the position of my desk is already slightly oblique, so that I can look out of the window without being blinded by the (occasional) sunshine. If you have an external video camera (as opposed to a camera built into your screen), you can experiment with its placement on the screen, shifting it more to the left or to the right. Notebook/laptop users can use (or build, for instance with a cardboard box) a camera stand/tripod to place the webcam at a comfortable height (eye level or above, unless you want people to peek into your nostrils) and turn it towards the place where you have the best background.

Remote evaluation: a new norm

The German evaluation society DeGEval has published a discussion paper (in German) that provides guidance for remote evaluation. It is based on experienced evaluators' lessons from more than a year of remote evaluation.

The term "remote evaluation" refers to an evaluation that is carried out by a person/team based outside of the country/region/place where the project to be evaluated has taken place. Due to travel restrictions linked to the COVID-19 pandemic, many organisations – especially those active in international cooperation – have commissioned remote or semi-remote/hybrid evaluations. In most of the examples known to me, the team leader or sole evaluator has been working from Europe, conducting interviews, surveys and workshops online and by telephone, with people all over the world. I have carried out a bunch of remote and hybrid evaluations, too, assessing multi-country initiatives in global policy advocacy and local economic empowerment, country programmes of international development players, and regional learning initiatives. It has been an enlightening experience. I can fully subscribe to the conclusion the DeGEval paper reaches: European evaluators don't need to travel that much.

The working paper balances the challenges of remote evaluation – you don’t get to meet people in person, you don’t visit the places where the project has happened… – against an impressive array of advantages. For example, a remote approach allows you to spread primary data collection over a longer stretch of time, because the evaluators do not need to squeeze all interviews into their two-week field trip. The money you save on travel-related costs can go into enlarging the data collection team. For instance, a colleague has recruited research assistants to carry out phone interviews. That has proven an excellent means to reach many more people, in many more places, than the number of persons the average evaluator can interview during a field trip.

The best thing about the paper is its conclusion. German readers, go to page 35 of the paper and read the last sentence! It encourages those who commission evaluations to carefully examine whether "international evaluators" really have to travel. In many cases, evaluations can be carried out locally, supporting local consulting firms and research institutions. Where it is considered important to have someone from abroad on the team, a hybrid model may still be a good – and environmentally sound – solution. As much as I enjoy interacting with people in other countries and places: often, we can make better use of our resources if we skip international travel.


Semi-audio interviewing

Your interviewee speaks via a mobile phone and their signal is too bad for a proper, two-way (remote) interview. What can you do? Reschedule the conversation? Opt for an 'e-mail interview' instead? A new, hybrid method emerged in an interview I carried out a few days ago, after both videoconferencing via a popular online platform and audio conferencing via the interviewee's favourite smartphone messenger had failed. I'd call it the 'written question, spoken answer' method or, say, semi-audio interviewing. It is easy and astonishingly effective – as a method of last resort.

Basically, after two-way speaking had failed in that interview, I simply typed my first interview question and asked the interlocutor to respond directly with brief voice messages. Many (or all?) smartphone messengers (Signal, WhatsApp etc.) come with an option to send voice messages. That is less burdensome than typing on the phone screen. Also, the interlocutor’s voice adds nuances that written text can’t capture. The interviewee has your questions in writing, allowing them to focus. You can adjust or develop subsequent questions as the interview progresses. As an added bonus, you can listen to your interlocutor’s messages again after the interview, just like with a classical audio-recorded conversation, but visually structured by your written questions (i.e. easily searchable).

Of course, it is not a real-life real-time conversation. It doesn’t beat a regular two-way video conference, either. The interview progresses slowly, as it takes time for the voice messages to upload (remember, the signal is poor). On the other hand, that allows the interview partners to gather their thoughts – well, unless they are distracted by other incoming messages. But that is always a risk in remote interviewing.

Evaluator’s Dilemma

The expectations evaluators are facing have changed. The resources we get for evaluation have, too – but they still fall short of what it takes to meet the new expectations. This Friday, Ines Freier, Bernward Causemann, and I will discuss this dilemma at the annual conference of the German Evaluation Society (DeGEval). The event is in German (see my announcement on my homepage). But I have summarised the dilemma in English as well – see the picture above. Does this ring a bell with my fellow evaluators?

New place, familiar blog

A big hello, and a big thank you for dropping by, to all visitors to my new blog space! Some of you may have followed my developblog.org, which I started in 2008. That was around the time when I decided to go freelance and felt I needed my own website. It was so hard to decide what the website should look like that in the end I simply started a blog. Now, in my 15th year of independent consulting, I finally have my own home website with a new space for my blog. I will continue, at a gentle pace, sharing news and insights from the world of evaluation and facilitation. This space will replace my old blog: no new items will be added to developblog.org, and before shutting it down completely, I will move and update some favourite articles to this new space. Advance thanks for visiting again!