
Good intentions in need of appropriate resourcing: OECD guidance on human rights and gender in evaluation

The Organisation for Economic Co-operation and Development (OECD) has issued a guidance document on Applying a Human Rights and Gender Equality Lens to the OECD Evaluation Criteria. It is wonderful to have this new resource – but a few shortcomings may make it hard to apply the guidance in real life.

First, the good things! About half of the 60-page volume is dedicated to explaining how human rights and gender equality (HRGE) considerations can be integrated (mainstreamed) into each of the six OECD criteria for evaluation in international cooperation (relevance, coherence, effectiveness, efficiency, impact, sustainability). There are inspiring ideas: for instance, the volume invites evaluators to assess the internal coherence of the intervention under evaluation (the evaluand) by checking whether it is aligned with human rights treaties and related policies. The publication also includes helpful definitions and great examples from real evaluations, as well as sample evaluation questions. It is written in a style that is accessible to evaluation specialists (not quite plain English, but not too jargony). And it points to plenty of useful references.

The document encourages evaluators to apply a human rights and gender equality lens to all evaluation criteria. However, there is no discussion of the RESOURCES needed for meaningful implementation of the guidance. In that way, OECD risks encouraging tokenistic tick-box and flag-waving exercises instead of serious consideration of HRGE. For example, OECD invites evaluators to reconstruct the evaluand’s theory of change with special attention to HRGE, to detect intended and unintended HRGE effects on various groups of people. That can work if the project focuses on HRGE; if it doesn’t, it takes extra time and expertise to add the HRGE dimensions. Also, the document commendably advocates for systematic participation of a wide spectrum of rights holders in the evaluation process – not just as data sources. To make this possible, people need to be reached, invited and reimbursed for any costs, translation needs to be organised, and so forth. The guidance would be more useful if it included estimates of the extra time and resources it takes to translate it into practice.

Another issue is about LEARNING. I love the fact that there are nifty tables with sample HRGE-sensitive evaluation questions for each OECD criterion. But most questions start with the phrase “To what extent…”, which invites accountability-focused answers of the yes/no/somewhat type. But isn’t evaluation also about discovering what has worked, what has not, under what conditions, and why? Obviously, these are aspects that an enterprising evaluator can discuss even under a question that starts with “To what extent…”, but we don’t always have time to add extra depth. The guidance would be more useful if it were geared to support both accountability- and learning-oriented evaluations.

There is another point I find difficult. Human rights are, by definition, indivisible and interdependent (a good reference is the definition by the Office of the High Commissioner for Human Rights). But the OECD resource invites evaluators to define which human rights principles are most relevant to the evaluand. Again, that could be OK for an intervention that focuses on a specific set of rights (e.g., political participation of indigenous people). But how can an evaluation team decide which set of rights should be considered when evaluating a solar energy project, a police training initiative or a multi-sector regional development programme? Should they privilege political over social rights, for example? Should they pick the right that takes the least effort to consider (if no extra resources are available for the HRGE lens)? Is it legitimate to pick just one set of rights and leave aside all others? That question deserves careful consideration in future editions of the guidance.

Rather than attempting to plough HRGE concerns into all OECD criteria, one could put together a few minimum standards for HRGE sensitivity to be applied (and resourced) across all evaluations. These could include:

  • Impeccable ethics (including trauma sensitivity, as helpfully pointed out in the OECD volume)
  • A degree of equity orientation, e.g. by considering unequal distribution of desired effects, and unexpected/unwanted effects by population group
  • Communication of the evaluation purpose and findings to rights holders (as suggested by OECD)
  • Mainstreaming HRGE concerns into evaluation questions around relevance and effectiveness

Internal evaluations need external perspectives

A 2019 post from my former blog developblog.org, which I ran from 2008 to 2021.

Internal evaluation can be an excellent way to check the quality of one’s work, to track progress (in programmes or projects, for instance) and to gather information for management decisions and longer-term learning. To make the most of such exercises, they should go beyond self-reflection. Especially for small to medium-sized teams or organisations, sitting around a table and contemplating one’s strengths and weaknesses, as well as successes and failures, is a good start, but just not enough.

Things you can do to gather more insights and make the most of them:

If you regularly collect and document information from partner organisations, clients or other people involved in or affected by your work, use it! Use it to find out whether the activities you and your partners carry out do – or are likely to – contribute to the goals you pursue. Use it also to examine – or read between the lines – how the quality of your organisation’s work is perceived.
You can also bring such information to a “data party” with people outside your organisation – for instance, some of those who are supposed to benefit from your projects, or external specialists in the field you work in. The idea is to make sense of the information from your projects/activities together, each participant contributing their own perspective. (Obviously, you will have to make sure data are sufficiently aggregated and anonymised so as to avoid violating anybody’s privacy.)

If you don’t continuously gather information from those involved in your projects/activities, then you can carry out your internal reflection in stages – for instance, (1) you decide together which questions (a handful at most!) your internal evaluation should answer, and (2) you then allow a few weeks’ time to gather information – for instance, in conversations with stakeholders and external persons, just like an external consultant would do in a “qualitative” evaluation.
If you don’t have time for that, you can replace step (2) with a consultation bringing together people who are directly involved in or affected by your work. Here, external facilitation can help create an atmosphere and a workflow that enable everyone to openly share their experience and their perceptions of your organisation’s work.

Both approaches take more time than a simple half-day workshop of navel-gazing. There is nothing wrong with workshops or short retreats – any break from a busy work routine can be beneficial. But involving others will multiply your chances of gathering precious new insights. Try it out!

Two or three reasons for working in tandem

A May 2019 blog post from my former blog, www.developblog.org

Evaluations come in many shapes and sizes. I have led multidisciplinary teams in multi-year assignments, and carried out smaller assignments all by myself. Last year was a lucky year, because most of my work happened in one of my favourite configurations: the tandem or duo – as in two competent persons with complementary or partly overlapping skills and knowledge working together as evaluators on an equal or near-equal footing. Two evaluators working together – even if one of them participates for a shorter spell than her colleague – means so much more than the sum of two persons’ capacities.

Obviously, two persons can carry out more work than one, and two pairs of eyes and ears perceive more than one. More importantly, two different persons are likely to interpret data differently, from their different perspectives. In my recent tandem assignments, we – the two evaluators – discussed our findings every day when we worked in the same location. At times we’d split up for a few days; in those cases, we’d exchange via phone or a secure messenger service at least twice a week. The tandem approach forces both evaluators to analyse, distil first findings and develop conclusions throughout the evaluation process. Conversely, when you’re on your own, you must keep your impressions to yourself (confidentiality in evaluation!). On lonely evenings in hotels far from home, it can be hard to overcome the fatigue at the end of busy days to study the day’s notes – for a tandem, this routine is much more inspiring. When you evaluate across countries and/or cultures, it makes sense to work in tandems that combine different backgrounds and social identities, so that “insider” and “outsider” perceptions and interpretations can challenge each other and lead to stronger findings. “Objectivity” in evaluation is a lofty goal – a team of two might not attain it, but at least the inter-subjective setup helps keep individual bias in check.

When I work as a solo evaluator, all I can do is look at my own notes and apply a good dose of self-reflection to question my own findings. I can only be in one place at a time and must juggle interviewing, facilitating group discussions and note-taking. I touch-type while carrying out interviews – a mentally and physically strenuous habit, but a necessary one, because resources for transcribing recorded interviews are often not part of the evaluation budget. When I write up my conclusions and recommendations, there is no peer to review them. In short, it is a tough, lonely exercise that potentially yields less robust results than an evaluation by a tandem. My clients appear to be very happy with the evaluations I carry out by myself. But even where resources are tight, I recommend setting up tandems – or at least some peer review process independent from the client and the evaluand – for the evaluation. Even a couple of extra days with a suitable colleague can turbo-charge the robustness of an evaluation’s findings and recommendations.

Small group work – keep it fresh!

A post from my former blog www.developblog.org, which I will take offline soon.

It is the early afternoon of the second workshop day; the participants are a bit drowsy from a rich lunch; messages have piled up on their smartphones, and some people would rather deal with those than discuss strategy or whatever the workshop is about. Small group work is on the workshop plan. What can you do to keep it lively and productive?

#1 Avoid the classical approach of ushering groups of six to twelve persons into separate rooms (“break-out rooms”): They’ll lose at least five minutes on the way there and again on the way back. To make matters worse, some participants will disappear into the corridors to attend to their smartphones and return when it is too late for productive involvement in the group work. Go for buzz groups instead: Everybody stays in the same large room (count some three square metres per participant), set up “world café” style, with participants clustered around round or square tables.

#2 Set rules for the small groups to create an effective thinking environment (see Nancy Kline’s highly commendable book Time to Think). One easy way is to insist on using a talking stick/ball/fluffy toy: Every participant must hold it and speak once before anyone gets a second turn. It is an excellent way to keep the group from being monopolised by a couple of big talkers. Also, put a clock on the table and have participants limit their verbal interventions to a maximum of three minutes each.

#3 Write each group’s assignment on a big piece of paper that stays with the group. Provide the groups with tools that help them structure their presentation. For instance, if the assignment is to map stakeholders, you can draw one of the common models on a flip chart (e.g. a power/interest grid, Lewin’s force field analysis, or concentric circles to designate core/direct/indirect stakeholders, to name but a few options) and ask participants to complete it together. Also, inviting participants to compile “do’s and don’ts” works well when the group work is about distilling lessons from experience.

#4 If all groups are supposed to work on the same question, or on questions that converge into a bigger picture, consider using the Institute of Cultural Affairs’ Technology of Participation (ToP). A key feature of this approach is the rapid succession of individual, small group and plenary reflection and visualisation in a way that enables everyone to contribute their thoughts in a safe manner.

#5 To keep afternoon sessions fresh, avoid heavy (buffet) lunches, make sure there is some daylight in the room, and provide all small groups with plenty of water, coffee/tea and something to nibble on.

#6 Last but not least: Stay engaged as a facilitator! Monitor the groups’ work, nudge them back to the question and the agreed group process if they stray from it, and be there to answer questions. Never ever dive into your smartphone while facilitating a workshop! Save it for the breaks.

Classism in evaluation design

This is a favourite post from my former blog, www.developblog.org, which I will soon remove from the web (after 15 years…). The post dates from 2019, but little has changed since then – except that more people are starting to talk about equitable evaluation, which is good news.

Individual interviews for “important persons”, focus groups for “beneficiaries”, right? Wrong!

These days I have been reviewing evaluations of projects supporting survivors of traumatising human rights violations in countries that are not quite at peace, or even still at war. One would think that in such circumstances, evaluators would be particularly respectful and careful with their interlocutors, avoiding questions and situations that would make them feel uncomfortable, trigger difficult emotions or cause a resurgence of their trauma. In some cases, the opposite is true:

Some evaluators asked people to talk about their traumatising experience in group discussions with five to ten persons – neighbours or strangers, people who were brought together in a one-off two-to-three-hour meeting only because the evaluators needed data from “beneficiaries”. To obtain data from project managers or local officials, the same evaluators tended to prefer individual interviews. I see an implicit message here: People in positions of power deserve more individual attention than simple users of project services. Is that really what we want, when we evaluate projects that are supposed to strengthen people’s confidence and empower them to transform their lives, contribute to change in their societies and make this world a better one?

The problem is not unique to human rights and service-related projects. I have seen evaluations of rural development programmes where “beneficiaries” were mainly interviewed in groups – for instance, in the convenient setting of an agricultural extension class. It is not only an issue of respect, or lack thereof; it is also a methodological problem. In group interviews, people speak not only to the person who conducts the interview, but also to everybody else who sits in the circle (or around the table). As a result, they are likely to speak in ways and about things they consider acceptable in that group setting (social desirability bias) – not necessarily about their true thoughts and feelings. Focus group discussions are not a good instrument to learn about personal thoughts and experience.

But they can be an excellent instrument for questions that are less personal, for instance, to map actors in a field the participants are familiar with, to learn about local social norms, or to get different experts’ views on a certain topic. For instance, when a project is about health services, it can make sense to run focus group discussions with health providers: They can explain the situation in their sector, sketch typical processes, discuss together where exactly the project fits in and what contributions it may have made, and so forth.

I would like to come back to the point of respectful interviews, especially when interviewees are survivors of traumatising violations. I did find one excellent example: The researchers designed questionnaires and interview guides that kept people from digging too deeply into difficult memories. They gave survivors a few days to think before they consented to be interviewed, and offered them the choice of the interview setting – a counselling centre, for instance, or a secluded hotel in a pleasant area. They provided breaks and meals, a couple of nights’ accommodation if needed, as well as a post-interview check-out with a psychologist – all that to make sure any distress caused by the interview could be dealt with. Incidentally, the researchers worked in a European country. There is no reason why one shouldn’t work that way in Africa or Asia, is there?

Evaluation in times of COVID-19

This post is part of a series of contributions to my former blog www.developblog.org, hosted on a different platform for a whopping 15 years (since 2008)! I am closing down the old blog and moving some interesting reading to this new setting. The following post distils lessons from lockdown times.

What does the surge of SARS-CoV-2 (the scientific name of the new coronavirus) infections in parts of Europe mean for international evaluation? Can we, as evaluators, join the soothing voices of those who say that the common flu has killed many more people and there is no reason to change anything in our lives? I don’t think so. I would like to remind all of us of the Do No Harm principle: Research ethics require us to carefully weigh the potential benefits of undertaking research (at a given time) against the potential harm associated with it. We can relax about ourselves, but we must not endanger others. International evaluations can also be done without international travel.

That is why yesterday, I decided to postpone a case study in an Asian country that has relatively few known coronavirus infections – not because I was worried I would contract the virus, but because I could pass it on to others. I live in Berlin, a city of 3.5 million inhabitants where some 58 cases of SARS-CoV-2 have been detected so far (yesterday’s data). That may seem little. But while I was forming my decision, it turned out that a close colleague’s partner, who had been in contact with a Covid-19 patient, had developed symptoms of Covid-19 (the name of the disease the virus causes). A few hours later, the Guardian (UK) published an article describing how an apparently healthy British couple contracted SARS-CoV-2 during air travel to Vietnam and left a trail of infected people wherever they went – several places spread across Vietnam.

The health advice published in Germany is to avoid all unnecessary travel. Evaluations are as necessary as ever – yet, most of the time, postponing them would hardly threaten anybody’s existence (apart from evaluators’ flow of earnings – a risk entrepreneurs are used to). As a matter of fact, many evaluations happen late anyway because of poor planning – see for instance my 2012 post on evaluation planning.

Going on as if there were no public health risk associated with a new, rapidly spreading and potentially deadly virus threatens other people’s lives, especially in countries where health systems are in poor shape or already overstretched. Especially when travelling to remote regions, we might carry the virus to populations who, by their relative isolation, could be relatively protected if we stayed away. Remember how UN peacekeepers introduced cholera into Haiti? See the UN Secretary-General’s apology (2016). The history of colonialism is full of examples of European diseases wiping out previously sheltered communities.

What if the evaluation is really urgent, for instance a condition for subsequent project funding (assuming there is no way to re-negotiate the condition in view of a public health crisis)? Work with national evaluators! Even in organisations that find it vital to have an “international” on their evaluation teams, it is established good practice – even in smaller evaluations – to work with “mixed” national/international teams. See also my post on “two are better than one”.

If you, as someone who commissions an evaluation, feel you must have an international consultant on the team, invite her to work remotely: Where internet connections are good, workshops, group discussions and interviews can be joined remotely via Skype, WhatsApp or a more secure video messaging service. Data collected by the national evaluation team can be analysed in regular phone conferences. Time and resources permitting, the national team can have all its activities audio-recorded, transcribed (and translated, if needed) in full, so that the remote evaluator can follow closely what is happening. There are many options, which can also come in handy if we get more serious about reducing the environmental impact of international travel. I have used these options in my evaluation practice and they have yielded good results.

Remember the old saying about development being all about working “ourselves” (in the “global North”) out of “our” business? That applies to international evaluation, too: Let’s strive to ‘localise’ evaluation while developing a rich flow of knowledge and skills exchange across the world!

Know what you need to know

This is a blog post written in 2020. I have taken it from my old blog, www.developblog.org, which I will close down later this year.

Evaluations often come with terms of reference (TOR) that discourage even the most intrepid evaluator. A frequent issue is long lists of evaluation questions that oscillate between the broadest questions – e.g. “What difference has the project made in people’s lives?” – and very specific aspects, e.g. “What was the percentage of women participating in training sessions?”. Sometimes I wonder whether such TOR actually state what people really want to find out.

I remember the first evaluation I commissioned, back in the last quarter of the 20th century. I asked my colleague how to write TOR. She said, “Just take the TOR from some other project and add questions that you find important”. I picked up the first evaluation TOR I came across, found all the questions interesting and added lots, which I felt showed that I was smart and interested in the project. Then I shared the TOR in our team and others followed suit, asking plenty more interesting questions.

I wonder whether this type of process is still being used. Typically, at the end, you have a long list of “nice to know” questions that’ll make it very hard to focus on the questions that are crucial for the project.

I know I have written about this before. I can’t stop writing about it. It is very rare that I come across TOR with evaluation questions that appear to describe accurately what people really want and need to find out. 

If, as someone who commissions the evaluation, you are not sure which questions matter most, ask those involved in the project. It is very useful to ask them, anyway, even if you think you know the most important questions. If you need more support, invite the evaluator to review the questions in the inception phase – with you and all other stakeholders in the evaluation – and be open to major modifications.

But please, keep the list of evaluation questions short and clear. Don’t worry about what exactly the evaluator will need to ask or look for to answer your questions. It is the evaluator’s job to develop indicators, questionnaires, interview guides and so forth. She’ll work with you and others to identify or develop appropriate instruments for the specific context of the evaluation. (The case is somewhat different in organisations that attempt to gather a set of data against standardised indicators across many evaluations – but even then, they can be focused and parsimonious to make sure they get high-quality information and not just ticked-off boxes.)

Even just one or two evaluation questions can be perfectly sufficient. Anything more than ten gets confusing. And put in some time for a proper inception phase, when the evaluation specialists will work with you on designing the evaluation. Build in joint reflection loops. You’ll get so much more out of your evaluation.

Gender equality in organisations

This is a blog post from 2020, moved from my former blog www.developblog.org which I will take offline near the end of this year.

Gender equality is a key element of sustainable development – as illustrated in the Sustainable Development Goals (SDGs), which weave gender across virtually all 17 goals. It makes sense that ‘mainstream’ organisations, which are not specialised in promoting gender equality, have developed gender policies and related activities. Where do these organisations stand, and what should come next?

Vera Siber and I carried out a study with four German organisations to find out about their work on gender justice: a political foundation, two non-governmental organisations (NGOs) specialised in international development (a faith-based one and a secular one), and a scientific agency attached to a federal ministry. The four organisations differed in the scope of their work, their size, and the degree to which they stated gender justice as an explicit goal – but they came together to commission our study. We reviewed documentation produced by the four organisations and interviewed some 50 persons representing different perspectives within them.

The framework developed by Gender at Work guided our analysis. It is a matrix built around two axes – formal/informal and individual/systemic – which define four quadrants: The individual/informal quadrant relates to personal consciousness and capabilities, and the systemic/informal one to unwritten norms and practices. The individual/formal quadrant refers to individual resources, and the systemic/formal one to rules and policies.

The four quadrants of the matrix look different in each of the four organisations (cases) we researched. On the formal/systemic side, all cases displayed gender policy papers, but the documents varied enormously in scope and precision. Three organisations employed gender specialists; one did not. In all cases, staff members from different departments met regularly to discuss gender issues – but in only one case did job descriptions allocate time for those activities. The degree to which gender was integrated in planning and monitoring processes varied widely. On the formal/individual side, women in one case found it easier to reach leadership positions thanks to an adapted recruitment process and dedicated mentoring and leadership training.

Our study confirmed the notion that gender mainstreaming yields tangible outcomes when combined with specific work on gender equality. For example, one organisation had supported women’s organisations in South Asia for many years. It introduced those organisations to ‘mainstream’ grantees – i.e., grantees with no specific feminist agenda – to strengthen the grantees’ thinking and action so that women and girls could contribute to and benefit more fully from their work. In the same case, success stories and pressure by feminist grantees contributed to reshaping the donor’s overall regional strategy.

The informal side of the Gender at Work matrix is to a great extent about individual commitment – present in all four cases we reviewed – and organisational culture. In one case, committed staff members put in their ‘own’ time to organise internal workshops on gender. In that way, they built knowledge within the organisation ‘bottom-up’ and pressed the top levels for more support for gender equality. In a contrasting case, organisational leadership successfully pushed for the implementation of a progressive gender policy. This top-down approach, arguably necessary when attempting to mainstream gender across an organisation and its work, raised worries among some of our interlocutors: Would it still be possible to openly voice doubts, start controversial discussions and introduce new ideas?

Our study could not answer that question. What emerged clearly was that even organisations with rather advanced systems for gender mainstreaming must continue to update their knowledge and re-examine their goals and approaches regularly, as new needs and interests emerge. For instance, work on the rights of lesbian, gay, bisexual, transgender, intersex and queer (LGBTIQ) persons, as well as intersectional approaches that take multiple discriminations into account, were still in their infancy in most cases. Also, from 2017 on, the #MeToo movement against sexual harassment at work sparked a need to introduce or strengthen policies and processes. At the time of our research, anti-harassment policies had only just been introduced – or were still under development – in most of the reviewed organisations.

There is no end point to work on gender equality. It takes constant, deliberate, and well-informed efforts to secure the commitment of everyone in an organisation and to ensure its work contributes to gender equality in a changing world. At the very least, organisations should make sure they do not deepen existing inequalities (do no harm). 

The 2030 Agenda for Sustainable Development exhorts all states to leave no-one behind: Diversity and the ensuing differences in people’s needs and interests must be acknowledged and dealt with. Sexism, racism, and other forms of discrimination within organisations and beyond must be identified and countered. There is plenty of instructive experience around the world – organisations can tap into it by multiplying opportunities for exchange, open debate, and joint learning. All this requires dedicated resources. 

Why not try out gender budgeting, i.e., a process whereby organisations systematically examine their budgets against the anticipated effects on gender equality? If international development agencies can teach governments in the ‘global South’ to introduce gender budgeting, surely, they can do it within their own systems? If these agencies require their partner organisations to display a gender-balanced leadership structure, surely, they can organise their own leadership along the same lines? Would that be a good resolution for 2021 and beyond?

Thoughtful guidance on applying evaluation criteria

This is a blogpost from 2021, moved from my former blog www.developblog.org which I will take offline later this year.

Long-awaited new guidance on applying the evaluation criteria defined by the Development Assistance Committee of the Organisation for Economic Co-operation and Development (OECD-DAC) is finally available in this publication! Long-awaited, because evaluators and development practitioners have grown desperate with assignments that are expected to gauge every single project against every single OECD-DAC criterion, regardless of the project’s nature and of the timing and resources of the evaluation. This new, gently worded document is a weapon evaluators can use to defend their quest for focus and depth in evaluation.

Those who commission evaluations, please go straight to page 24, which states very clearly: “The criteria are not intended to be applied in a standard, fixed way for every intervention or used in a tickbox fashion. Indeed the criteria should be carefully interpreted or understood in relation to the intervention being evaluated. This encourages flexibility and adaptation of the criteria to each individual evaluation. It should be clarified which specific concepts in the criteria will be drawn upon in the evaluation and why.”

On page 28, you will find a whole section titled “Choosing which criteria to use”, which makes it clear that evaluations should focus on the OECD-DAC criteria that make sense in view of the needs and possibilities of the specific project, and for the evaluation process. It provides a wonderful one-question heuristic: “If we could ask only one question about this intervention, what would it be?” And it reminds readers that some questions are better answered by other means, such as research projects or a facilitated learning process. The availability of data and resources – including time – for the evaluation helps determine which evaluation criteria to apply, and which not. Page 32 reminds us of the necessity to use a gender lens, with a handy checklist-like table on page 33 (better late than never).

About half of the publication is dedicated to defining the six evaluation criteria – relevance, coherence, effectiveness, efficiency, impact, and sustainability – with plenty of examples. This is also extremely helpful. Each chapter comes with a table that summarises common challenges related to the criterion – and what evaluators and evaluation managers can do to overcome them. It also shows very clearly that lack of preparation on the evaluation management side makes it very hard for evaluators to do a decent job – see for example table 4.3 (p.55) on assessing effectiveness.

The document is a bit ambiguous on some questions: The chapter on efficiency still defines efficiency as “the conversion of inputs (…) into outputs (…) in the most cost-effective way possible, as compared to feasible alternatives in the context” (p.58), which makes it extremely hard to assess the efficiency of, say, a project that supports litigation in international courts – an intervention that may take decades to yield the desired result. However, the guidance document states that resources should be understood in the broadest sense and include full economic costs. On that basis, one can indeed argue, as Jasmin Rocha and I have on Zenda Ofir’s blog, that non-monetary costs, hidden costs and the cost of inaction must be taken into account. Yet table 4.4 on efficiency-related challenges remains vague (p.61). Has anyone read the reference quoted in the table (Palenberg 2011)? I did, and found it very cautious in its conclusions. My impression is that in many cases, evaluators of development interventions are not in a position to assess efficiency in any meaningful manner.

On the whole, I would describe the new OECD-DAC publication as a big step forward. I warmly recommend it to anyone who designs, manages or commissions evaluations.