Results-Based M&E and Outcome and Impact Evaluation
2015 Annual Edition
Results-Based Monitoring and Evaluation, with Ray Rist, instructor - Monday-Friday, 7-11 September 2015
Outcome and Impact Evaluation, with Michael Bamberger, instructor - Monday-Friday, 14-18 September 2015
Extended deadline for applications: 31 August 2015
Our courses
As part of the 10th Annual Edition of the joint Bologna Centre for International Development / Department of Economics Summer School Programme on Monitoring and Evaluation, the September 2015 modules focus on Results-Based Monitoring and Evaluation (first module) and Outcome and Impact Evaluation (second module).

The first module, on Results-Based Monitoring and Evaluation, consists of two workshops: one on how to design and build a results-based monitoring and evaluation (M&E) system in your organization, and a second on how to design, collect, and analyze data for a case study evaluation.

The first workshop (three days) is based on the Ten Steps to a Results-Based Monitoring and Evaluation System developed by Jody Kusek and Ray Rist at The World Bank. The emphasis is on how to plan, design, and implement such a system: beginning with a readiness assessment, moving through goal setting, indicator selection, establishing baseline data, and setting targets (see the illustrative sketch below), and on to data analysis, reporting, and ensuring use and sustainability. The handbook describes each of the ten steps in detail, the tasks needed to complete them, and the tools available to help along the way; throughout the workshop, participants will work through these steps, tasks, and tools. Reading materials will be provided.

The second workshop (two days) focuses on the Uses (and Abuses) of Case Studies in Evaluation. Although case study methodology is often useful in addressing evaluation questions, choosing the right type of case study is as important as choosing the methodology itself. The workshop defines what a case study is, when to use one, and which type to deploy. Participants will critique several types of case studies and conduct several short fieldwork exercises. Again, reading material will be provided.

The course is primarily targeted at officials who face the challenge of managing for results. Developing countries in particular have multiple obstacles to overcome in building M&E systems, but results-based M&E systems are a continuous work in progress for developed and developing countries alike. When implemented properly, these systems provide a continuous flow of feedback information that can help guide policymakers toward the desired results. Seasoned program managers in developed countries and international organizations, where results-based M&E systems are now in place, use this approach to gain insight into the performance of their organizations.

The course, delivered by Ray Rist himself, can be taken as a guide to designing and constructing a results-based M&E system in the public sector. Its goal is to prepare participants to plan, design, and implement such a system within their own organizations. The course (and the handbook on which it is based) will also demonstrate how an M&E system can be a valuable tool in supporting good public management.
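As a purely illustrative aside on the indicator-related steps (indicator selection, baselines, targets, analysis, and reporting), the sketch below models a single results indicator as a small data structure and computes its progress toward target. It is a minimal sketch of the general idea, not material from the Kusek/Rist handbook; all names and figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ResultsIndicator:
    """One indicator in a results-based M&E system (illustrative only).

    Mirrors the data-oriented steps of the ten-step model: an indicator
    is selected for an outcome, given a baseline, assigned a target,
    and then tracked so progress can be analyzed and reported.
    """
    goal: str        # the outcome the indicator tracks
    name: str        # the indicator selected for that outcome
    baseline: float  # value established before the intervention
    target: float    # agreed target value
    latest: float    # most recent observed value

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered so far."""
        span = self.target - self.baseline
        return (self.latest - self.baseline) / span if span else 0.0

# Hypothetical example: tracking a primary-school completion rate.
indicator = ResultsIndicator(
    goal="Improve basic education outcomes",
    name="Primary completion rate (%)",
    baseline=62.0, target=80.0, latest=71.0,
)
print(f"{indicator.name}: {indicator.progress():.0%} of the way to target")
```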
Here is the calendar for the first module: FIRST WEEK AGENDA

The second module, on Outcome and Impact Evaluation, will be delivered by Michael Bamberger and consists of three workshops: the first on how to identify unintended outcomes of development programs; the second on incorporating gender into monitoring and evaluation systems; the third on evaluating complex development programs.

The first workshop (two days), Identifying unintended outcomes of development programs, addresses two questions: “Why do many development evaluations fail to detect unintended outcomes?” and “How can planners and evaluators address these challenges?”. As all experienced program evaluators know, few, if any, development programs turn out exactly as planned. This is the case, for example, with programs that (intentionally or unintentionally) prevent certain groups from accessing services or, even worse, worsen living conditions for some groups. The workshop will review why so many evaluation designs, including randomized control trials, quasi-experimental designs, theory-based evaluations, and results-based M&E systems, often fail to capture unintended outcomes.

The workshop will also examine the situation with respect to theories of change (TOCs). While TOCs can be, and often are, designed to identify unintended outcomes, experience shows that in many cases the agencies commissioning an evaluation are mainly interested in whether programs have achieved their intended outcomes, and evaluators are sometimes discouraged from investing time and resources in identifying unintended ones. We will present examples where TOCs are designed to identify unintended outcomes and will draw lessons on ways to strengthen the ability of all TOCs to address them.

The workshop will cover the following topics:
(1) Classifying unintended outcomes of international development programs;
(2) Examples of evaluation designs failing to identify unintended outcomes, and the often serious consequences;
(3) Methodological, political, and real-world constraints that explain why conventional evaluation methodologies often fail to capture these outcomes;
(4) Case studies illustrating how randomized control trials and quasi-experimental designs often fail to identify unintended outcomes;
(5) Strategies that planners and evaluators can adopt to strengthen their ability to identify unintended outcomes (e.g., through creative and innovative use of mixed-methods approaches).

The workshop will encourage active group participation and invite participants to share their experiences of challenges and useful strategies for addressing unintended outcomes. A number of group exercises will be included so that participants can apply the tools and techniques presented.

The second workshop (one day), Incorporating Gender into Monitoring and Evaluation at the Country, Program and Project Levels, discusses the challenges and opportunities for M&E to address gender inequalities at the national, program, and project levels. Despite significant progress, gender inequalities persist in all countries. These inequalities both negate fundamental human rights and present serious barriers to the achievement of national development objectives and the promotion of equity, human rights, and social justice. Notwithstanding widespread commitment to gender equality by governments and development agencies, and despite compelling evidence of persistent gender inequalities, conventional M&E systems often fail to address gender differences.
The workshop reviews experience and provides guidelines, tools, and techniques for developing gender-responsive M&E studies and systems at the national, program, and project levels. It draws on the international experience of governments, donor agencies, and NGOs to outline the main steps in the design and implementation of gender-responsive M&E systems and approaches. The workshop will include examples of the serious problems that can arise when gender is not addressed, as well as examples where gender has been successfully addressed at the national and project levels. In addition to methodological issues, the workshop will focus on the real-world challenges and opportunities of incorporating gender into organizations that face budget and professional resource constraints, and where management and staff may have reservations about the value of doing so.

The third workshop (two days), Assessing the outcomes and impacts of complex programs, reviews strategies for assessing the outcomes and impacts of complex development interventions, such as general budget support, multi-component sector programs, and cross-cutting thematic programs (e.g., gender mainstreaming or peace-building). The challenge for evaluators is that conventional evaluation methods, such as randomized control trials and quasi-experimental designs, as well as many qualitative methods, cannot address the complex characteristics of the contexts in which programs are conceived and implemented, or specific traits of the programs themselves. Exploring the interactions that can unfold between program features and context traits requires tracing multiple causal paths and allowing for emergent designs in which program objectives and intended outcomes change as the program evolves.

The central message is that complexity requires innovative evaluation designs that select from the many methods at our disposal. Practical tools and techniques are available for assessing the outcomes of complex programs. Many, but not all, of the proposed approaches involve “unpacking” complex programs into a set of components or elements that are easier to evaluate; the challenge is then to “repack” the findings in a way that recognizes complexity and avoids oversimplification while still addressing the relevant evaluation questions (a small illustrative sketch follows below). The workshop reviews the contribution of complexity science to exploring the linkages between contexts and program designs. A wide range of practical evaluation tools are discussed and illustrated with examples from the field, including: theory-based approaches; quantitative, qualitative, and mixed-methods designs; rating systems (often based on OECD/DAC criteria); and innovative Web-based approaches, such as concept mapping and other forms of expert consultation, crowdsourcing and participatory planning, and the multiple applications of big data. Participants will apply different strategies in group exercises, and are encouraged to bring their own evaluations of complex interventions to illustrate promising approaches and to seek suggestions from their colleagues.
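To make the “unpack/repack” idea concrete, here is a minimal, hypothetical sketch (not taken from the workshop materials): each “unpacked” component of a complex program is scored against OECD/DAC-style criteria, and the ratings are then “repacked” into a program-level summary that keeps the spread across components visible rather than collapsing everything into a single number. The component names, criteria, and scores are invented for illustration.

```python
from statistics import mean

# Hypothetical ratings (1 = poor, 4 = strong) for each "unpacked"
# component of a complex program, scored on OECD/DAC-style criteria.
component_ratings = {
    "cash-transfer component": {"relevance": 4, "effectiveness": 3, "sustainability": 2},
    "training component":      {"relevance": 3, "effectiveness": 2, "sustainability": 3},
    "policy-reform component": {"relevance": 4, "effectiveness": 4, "sustainability": 3},
}

# "Repack": aggregate per criterion across components, reporting the
# range as well as the mean so complexity is not flattened away.
criteria = sorted({c for scores in component_ratings.values() for c in scores})
for criterion in criteria:
    values = [scores[criterion] for scores in component_ratings.values()]
    print(f"{criterion}: mean {mean(values):.1f}, range {min(values)}-{max(values)}")
```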
Here is the calendar for the second module: SECOND WEEK AGENDA

Our People
Direction and management:
How to apply