12 December 2014 | Category: Opinion

    Do development programmes work?

    Can you imagine doctors prescribing medicines without knowing whether or not they work? What would it be like if they tried out a series of treatments on us until one, by chance, happened to cure our disease? Can you imagine how long our illnesses would drag on while they searched for the right drug, and how much money we would waste on ineffective medicines? Luckily, drugs have been tested for effectiveness since the 18th century, so when we go to the doctor, he or she knows which drug is most likely to cure us.

    Unfortunately, in the world of development programmes, we’re closer to the former situation than to the latter. Every year thousands of programmes designed to fight underdevelopment are carried out, yet we know very little about their effectiveness. Do vocational training programmes reduce unemployment? Which programme is the most cost-effective for preventing children from dropping out of school in Africa: scholarships or antiparasitic pills?

    The truth is that finding this out requires study, just as in medicine. Impact evaluations are tools that measure the effects of programmes against their final development objectives. For example, thanks to impact evaluations of vocational training programmes in the Dominican Republic, Colombia and Turkey, today we have evidence that these programmes have positive effects on the quality of work, but only very modest effects on reducing unemployment. In addition, thanks to a summary of several evaluations, today we know that for every 100 dollars spent on deworming programmes in Africa, school attendance in the population increases by 13.9 years, while the same money spent on scholarships increases it by only 0.27 years.
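    Put in cost-per-outcome terms, those figures imply a striking gap. A quick back-of-the-envelope check (a minimal sketch in Python; the variable names are ours, the figures are the ones cited above):

```python
# Back-of-the-envelope cost per additional year of school attendance,
# using the figures cited in the evaluations above.
budget = 100.0            # dollars spent on each programme
deworming_years = 13.9    # extra years of attendance per $100 on deworming
scholarship_years = 0.27  # extra years of attendance per $100 on scholarships

print(f"Deworming:    ${budget / deworming_years:.2f} per additional year")    # ~$7.19
print(f"Scholarships: ${budget / scholarship_years:.2f} per additional year")  # ~$370.37
print(f"Deworming buys ~{deworming_years / scholarship_years:.0f}x more attendance per dollar")
```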

    Only with evidence like this do public-sector policy-makers have valid tools for choosing the right programmes to combat underdevelopment, so that problems are solved faster and fewer resources are wasted.

    To measure the effectiveness of programmes, impact evaluations compare two groups: a treatment group of people who received the programme and a control group of people who did not (just like in drug studies!).

    The greatest challenge of these methodologies is finding a valid control group. And that is not always easy. For example, to measure the impact of a vocational training programme, we might think to compare employment levels before and after participation in the programme.

    If the difference were positive, would you say the programme had been effective? At first glance it might seem so, but it might also just be that the economy improved, raising employment levels, while the programme itself was totally ineffective. To avoid this problem, we might instead compare a group of unemployed people who signed up to participate in the programme with another group (facing the same economy) that did not. If the participants found more work, would you say the programme had been effective? Again, it might seem so, but it might also be that the people who signed up did so simply because they were more motivated to find work, and this motivation, not the programme, is why they found more work.
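    Both pitfalls are easy to see in a small simulation (a sketch with made-up numbers, not data from any real programme). Here a completely ineffective programme looks effective under either naive comparison:

```python
import random
random.seed(0)

N = 100_000
# Unobserved "motivation" in [0, 1]; it raises job chances on its own.
motivation = [random.random() for _ in range(N)]

# In this simulation the programme has NO real effect.
# Before: weak economy, employment probability = 20% + 30% * motivation.
# After:  the economy improves, probability = 35% + 30% * motivation.
before = [random.random() < 0.20 + 0.30 * m for m in motivation]
after = [random.random() < 0.35 + 0.30 * m for m in motivation]

# More motivated people sign up for the (useless) training.
signed = [m > 0.5 for m in motivation]
n_signed = sum(signed)

# Pitfall 1: before/after comparison on participants only.
part_before = sum(b for b, s in zip(before, signed) if s) / n_signed
part_after = sum(a for a, s in zip(after, signed) if s) / n_signed
print(f"Participants: {part_before:.1%} before -> {part_after:.1%} after")
# The jump (~15 points) comes entirely from the better economy.

# Pitfall 2: participants vs non-participants in the same period.
nonpart_after = sum(a for a, s in zip(after, signed) if not s) / (N - n_signed)
print(f"After: participants {part_after:.1%} vs non-participants {nonpart_after:.1%}")
# The gap (~15 points) comes entirely from motivation, not the programme.
```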

    Therefore, in impact evaluations, for the control group to be considered valid, it must be a group that was subject to the same conditions at the same time as the treatment group (to avoid the first problem) and that is, at least on average, equivalent to the treatment group in every way (to avoid the second problem). The most rigorous way of generating valid control groups is to randomly assign people to participate or not participate in the programme. This methodology, called experimental, simply means assigning people to the programme using a lottery. However, sometimes it is not possible to randomize: the programme may be delivered to the entire population, or delivered based on a poverty ranking, making a lottery impossible. In these cases, other methodologies can help us find good control groups. They are called quasi-experimental, and although they require more assumptions (which makes them technically “weaker”), they are also good tools for measuring impact.
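    Continuing the same toy example (a sketch, not any agency's actual method): when assignment is by lottery, self-selection disappears, the two groups are comparable on average, and a simple difference in means recovers the programme's true effect.

```python
import random
random.seed(1)

N = 100_000
motivation = [random.random() for _ in range(N)]

# This time give the programme a TRUE effect of +5 percentage points.
TRUE_EFFECT = 0.05

# Experimental design: assign people to the programme by lottery.
treated = [random.random() < 0.5 for _ in range(N)]

employed = [
    random.random() < 0.35 + 0.30 * m + (TRUE_EFFECT if t else 0.0)
    for m, t in zip(motivation, treated)
]

# Randomization balances motivation (and everything else) across groups,
# so the simple difference in mean employment estimates the impact.
n_treated = sum(treated)
t_rate = sum(e for e, t in zip(employed, treated) if t) / n_treated
c_rate = sum(e for e, t in zip(employed, treated) if not t) / (N - n_treated)
print(f"Estimated impact: {t_rate - c_rate:+.1%}")  # close to +5.0%
```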

    In recent years, interest in measuring the effectiveness of programmes has grown and, as a result, impact evaluations of development programmes have increased considerably. The Inter-American Development Bank and the World Bank are two good examples of institutions in which impact evaluations have become a key tool for learning what works and feeding those lessons back into project design. The IDB has published some of its lessons learnt in 2013, and the World Bank publishes the evaluations it carries out.

    In the academic world, there are also study centres dedicated to searching for rigorous evidence on which programmes work and which do not. Some of the most important are the Poverty Action Lab (J-PAL, associated with MIT), Innovations for Poverty Action (IPA, founded by a Yale professor), and the Center for Evaluation for Global Action (CEGA, at UC Berkeley). There are also global initiatives dedicated to this area, such as the Network of Networks for Impact Evaluation (NONIE) and the International Initiative for Impact Evaluation (3ie).

    With each impact evaluation, we learn a little bit more about which programmes work best and why. Nevertheless, we are still a long way from being as effective as doctors when we “prescribe” programmes in an attempt to cure the “disease of underdevelopment”.

    The author has sole responsibility for the opinions and comments expressed in this blog

