Thursday, September 11, 2008

Q & A with Esther Duflo

Professor Esther Duflo, co-director of the Abdul Latif Jameel Poverty Action Lab at MIT, has answered questions submitted to her through the Managing Globalization blog. She has also answered the question I asked about drawing macro-level policy prescriptions from the conclusions of randomized controlled trials (RCTs).

Professor Duflo is "one of a new wave of development economists who have been instrumental in changing the focus of their field - away from one-size-fits-all solutions and towards specific, detailed studies of ground-level problems in poor areas. She is known for highly rigorous work addressing the roots of poverty in India, Kenya, South Africa and elsewhere. Her topics have included education, health and pollution, saving patterns and even the rate of return on fertilizer."

Q. How far can conclusions derived from randomized controlled trials be stretched when it comes to policy prescription to tackle poverty? If we find out that a certain intervention is having a positive impact on the fight against poverty, then how appropriate would it be to prioritize the intervention at a macro level and deduce national economic policy based on the result of the intervention? Besides local political and economic institutions, what other factors should we be careful of when scaling up policies which are rigorously and successfully tested at a micro level?

Chandan Sapkota
United States

A. This question, as well as Kartik’s (below), is excellent, and the two are related. Let me start by reminding everyone what a randomized controlled trial is, and how such trials are used to evaluate poverty alleviation interventions. You can find much more information on the site of the Abdul Latif Jameel Poverty Action Lab. In particular, we describe randomized controlled trials and their rationale in some detail.

Generally, people or communities who benefit from an intervention are not comparable. For example, schools that receive extra textbooks may be either richer or poorer than other schools: they could be richer because only rich schools can afford books, or poorer because an NGO has decided to give textbooks to the poorest schools in the area. This makes it very difficult to evaluate the impact of extra textbooks by comparing schools with and without them. Randomized evaluations follow the lead of medicine: a sample of schools is selected, and half of them are randomly assigned to receive the textbooks (usually, the other half is also given textbooks after the experiment is concluded and the results are out). If the sample is large enough, we can now be sure that the children in schools with and without extra textbooks differ only because of the textbooks. When we compare their scores after a year, we can be sure that any difference is due to textbooks, not to something else. Michael Kremer and colleagues ran exactly such an experiment in Kenya, and found quite surprising results, which you may want to check out….
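To make the logic of random assignment concrete, here is a minimal toy simulation in Python. It is my own sketch, not from the Q&A or the Kenya study, and every number in it (sample size, effect size, noise) is invented for illustration: because textbooks are assigned by lottery, the simple difference in mean test scores between treated and control schools recovers the true effect, even though an unobserved factor (school wealth) also drives scores.

```python
# Toy simulation of a randomized evaluation, in the spirit of the textbook
# example above. All numbers are invented; this is not the Kenya study.
import random
import statistics

random.seed(0)

N_SCHOOLS = 200          # hypothetical sample of schools
TRUE_EFFECT = 0.1        # assumed gain in test scores from extra textbooks

# Each school has an unobserved "wealth" level that also affects scores.
schools = [{"wealth": random.gauss(0, 1)} for _ in range(N_SCHOOLS)]

# Random assignment: half the schools get textbooks, half serve as controls.
random.shuffle(schools)
for i, s in enumerate(schools):
    s["textbooks"] = 1 if i < N_SCHOOLS // 2 else 0

# Test scores depend on wealth, on textbooks, and on noise.
for s in schools:
    s["score"] = 0.5 * s["wealth"] + TRUE_EFFECT * s["textbooks"] + random.gauss(0, 1)

treated = [s["score"] for s in schools if s["textbooks"] == 1]
control = [s["score"] for s in schools if s["textbooks"] == 0]

# Because assignment was random, the simple difference in mean scores is an
# unbiased estimate of the effect of textbooks, even though wealth also
# drives scores.
estimate = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated effect of textbooks: {estimate:.3f} (true effect: {TRUE_EFFECT})")
```

If you instead gave the textbooks only to, say, the wealthiest half of the schools, the same difference in means would be badly biased, which is exactly the selection problem Duflo describes above.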

Now, let me turn to your question. When we run an experiment and get the results, we know the effect this program had in this particular place. This is much better than the information we generally have when deciding on policy (nothing…), but is it good enough to act on and to recommend a more general policy? There are several obstacles.

First, the results may not replicate across contexts; I discuss that in the next answer.

Second, a program may be implemented in very different ways at a large scale. For example, it may be done well by a non-governmental organization, but corruption problems may creep in when it is implemented by a government. These implementation issues will have to be ironed out. This is important, and scaling-up challenges have to be considered, but it does not take away from the finding that we now know what the potential of the program would be if it were correctly implemented. If we find an effective program, this suggests that it is worth investing some effort in figuring out how to implement it correctly on a large scale. This can also be experimented with, by the way: some of the very exciting work in development economics these days is precisely about how to effectively implement programs (see for example Ben Olken’s work, which I discussed in response to another question).

Third, there may be market equilibrium effects. For example, if I find that by randomly offering secondary school scholarships to some kids, I increase their wages compared to those who did not receive the scholarships, this may not tell me what the effect of doing this nationwide would be: if everybody received a secondary education, the returns to secondary school may go up or down compared to a situation where few people received one. There are two ways to deal with these effects: in some cases, it may be possible to organize experiments at the “market” level (though I think it would be hard in the example I just described). In others, we have to use a priori economic reasoning to think about whether market equilibrium effects are going to be important or not. In many cases, we have no reason to think they would be large enough to undo the effect of the policy.
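To illustrate what a market equilibrium effect could look like, here is a deliberately crude sketch of my own (not Duflo’s model): I simply assume a hypothetical wage premium for secondary schooling that declines as the share of educated workers rises, so the premium measured in a small pilot overstates what graduates would earn after a nationwide scale-up. The functional form and every number are invented.

```python
# Toy illustration of a market (general) equilibrium effect. The functional
# form and all numbers are invented; the only point is that the return
# measured in a small pilot need not equal the return after a nationwide
# scale-up if it depends on how many people are educated.

def wage_premium(share_educated: float) -> float:
    """Hypothetical return to secondary school, declining as supply rises."""
    return 0.40 * (1.0 - 0.5 * share_educated)

baseline_share = 0.10  # assume 10% of workers start with secondary schooling

# Small pilot: scholarships for a few students barely move the economy-wide
# share, so the measured premium is essentially the baseline one.
pilot_estimate = wage_premium(baseline_share)

# Nationwide policy: the share of educated workers rises a lot, and the
# premium each graduate actually earns is lower than the pilot suggested.
scaled_share = 0.80
scaled_premium = wage_premium(scaled_share)

print(f"Premium measured in the pilot:   {pilot_estimate:.1%}")
print(f"Premium after national scale-up: {scaled_premium:.1%}")
```

Of course, as Duflo notes, the returns could also go up at scale (for instance, if a more educated workforce attracts new kinds of firms); the sketch only shows why the pilot estimate and the scaled-up effect can differ, not which way the difference goes.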

Abhijit Banerjee and I discuss these issues in a recent article (“The Experimental Approach to Development Economics”), which I have posted on my web page at MIT.

The whole Q&A with Duflo is very enriching; she answers other questions related to poverty reduction and the work done through MIT's Poverty Action Lab articulately and in great detail.

Personally, the three areas of development economics that I am most interested in and want to work on in grad school are the growth diagnostics approach, RCTs, and economic policies (on growth and development) derived from the sound theory and rigorous experimentation of the first two. For my senior honors thesis, I am exploring the growth diagnostics approach and will apply it in the context of Nepal (and, if possible, Burkina Faso). My future Op-Eds (columns) will be about identifying the constraints to economic growth in the Nepali economy. Read my earlier Op-Eds here. The latest one is here.