Lessons from the Poverty Action Lab

Speaker’s Corner, From The Practice, September/October 2017
An interview with Rachel Glennerster

Rachel Glennerster, the executive director of the Abdul Latif Jameel Poverty Action Lab at MIT, recently sat down with Jim Greiner, faculty director of the Access to Justice Lab, for a one-on-one conversation on the application of rigorous empirical research methods to help answer policy questions.

Jim Greiner: My name is Jim Greiner, and I am the faculty director of the Access to Justice Lab within the Center on the Legal Profession at Harvard Law School. With me is Rachel Glennerster, who is the executive director of the Abdul Latif Jameel Poverty Action Lab, or J-PAL, located at MIT. Could you tell us first off, what is J-PAL?

Rachel Glennerster: J-PAL is a center at MIT that promotes and conducts randomized impact evaluations of programs related to poverty. We’re a research center, but we’re also a network of affiliated professors around the world—over 150 now.

Greiner: What sorts of projects do you typically pursue with randomized controlled trials, or RCTs, as we’ll call them?

Glennerster: We look at anything to do with poverty. That could be in Chicago or Chennai. There are no geographic bounds to our work. We look at education and health, finance, women’s empowerment—a very broad range of things that affect the lives of poor people. The common feature is that everything we do uses the RCT methodology.

To give you an example, one of the things I’m working on at the moment is an evaluation of a program with Save the Children in Bangladesh. It looks at whether an empowerment program and/or an incentive to reduce child marriage could improve the lives of young women there. Save the Children was planning to reach 90,000 girls in three districts. We encouraged them to spread out their activities to more districts in Bangladesh and then randomly pick the communities where they implemented their programs, while still reaching 90,000 girls. We then went in and studied the communities where they worked and compared them to communities where they did not work to see if girls were more empowered. For instance, we looked at whether girls went to school more, whether they earned more money, and whether the age of marriage was different. We measured before the program, shortly after the program began, and now nine years later—both in the communities Save the Children operated in and those they did not.
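
To make the mechanics concrete, here is a minimal Python sketch of the kind of community-level (cluster) randomization Glennerster describes. The community names, sample sizes, and outcome numbers are all invented for illustration; none of them come from the Save the Children study.

```python
import random

random.seed(2017)  # a fixed seed makes the assignment reproducible and auditable

# 200 hypothetical eligible communities (all names and numbers are invented)
communities = [f"community_{i:03d}" for i in range(200)]

# Cluster randomization: whole communities, not individual girls,
# are assigned to the program or to the comparison group.
treated = set(random.sample(communities, 100))

# Simulated endline outcome per community (say, mean age of marriage),
# invented here with a small built-in program effect purely to show
# how the comparison works.
outcome = {
    c: random.gauss(16.0, 1.0) + (0.5 if c in treated else 0.0)
    for c in communities
}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

treated_mean = mean(outcome[c] for c in communities if c in treated)
control_mean = mean(outcome[c] for c in communities if c not in treated)

# Because assignment was random, this simple difference in means is an
# unbiased estimate of the program's effect on the outcome.
print(f"estimated effect: {treated_mean - control_mean:+.2f} years")
```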

One of the challenges that you have in knowing if strategies work is teasing out the impact of different factors. For example, in child marriage, a whole lot of factors go together. The girls who are educated are also the ones who are more likely to come from more empowered families—and those with lower rates of child marriage. It’s very hard to disentangle what is causing what. In fact, in this program, we noticed that girls who were more empowered to begin with were actually more likely to sign up for the empowerment program. Therefore, if you just compared the girls who signed up for the program to those who didn’t, you might conclude, “Oh, these girls are more empowered because of the program,” when in fact, they were more empowered even before the program started—that’s why they signed up! We had to be very careful in disentangling these causes and effects.
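
The trap Glennerster describes is easy to reproduce in a tiny simulation. In this sketch (all numbers invented), the program has zero true effect, yet a naive comparison of sign-ups with non-sign-ups appears to find a large one, simply because more-empowered girls are more likely to enroll.

```python
import math
import random

random.seed(7)

# Simulate the selection story above with a program that has ZERO true
# effect. Girls with higher baseline empowerment are more likely to
# sign up; all numbers are invented for illustration.
girls = []
for _ in range(10_000):
    baseline = random.gauss(0.0, 1.0)             # empowerment before the program
    p_signup = 1 / (1 + math.exp(-2 * baseline))  # more empowered -> more likely to enroll
    signed_up = random.random() < p_signup
    outcome = baseline + random.gauss(0.0, 0.5)   # the program itself adds nothing
    girls.append((signed_up, outcome))

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

naive_gap = (mean(o for s, o in girls if s)
             - mean(o for s, o in girls if not s))

# The gap is pure selection: the sign-ups were more empowered before
# the program started, exactly the trap described in the interview.
print(f"naive 'effect' from comparing sign-ups to non-sign-ups: {naive_gap:.2f}")
```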

Greiner: It makes sense why you have to have a comparison group and why you can’t simply compare those who do and don’t choose to participate in the program. But in law, we put a lot of stock in professional judgment. As lawyers or as judges, what we often want is to be able to pick who gets what program and who doesn’t on the basis of our professional judgment. And surely, if you’re allowed to pick who gets what program and who doesn’t, then you’d be able to compare who got it and who didn’t and be able to say, “I can tell the effect of the program from that.” Is that true?

Glennerster: We, too, put a lot of faith in judgment in international development. Agencies want to target their relief programs at the people who are most in need. The problem is, if you use judgment and then compare results for those who do and do not participate in a program, you might end up with completely the wrong conclusion. If, as a hypothetical, Save the Children chose the worst-affected communities in Bangladesh for their program and then compared those to the ones they did not choose to determine the impact of their program, they might conclude that their program was actually causing problems, when in fact they had simply targeted those most in need to start with. We should also recognize that often our judgment is not very good. We often don’t target that well. There are usually more similar people eligible than you have the ability to serve. Sometimes people think they are not targeting and say, “Oh, we chose randomly,” but something usually drove the selection unless you’ve actually gone through the process of randomizing.

Greiner: It sounds like there are two things going on. First, when the professional makes judgments about who should get a program—in law, which set of cases should get a program—the professional may make inappropriate judgments and, in fact, may pick certain kinds of cases and not others. On top of that, even if the professional is well meaning, the professional may pick cases that are in fact really hard in some sense—perhaps the most disempowered girls in your empowerment program or, in law, the hardest-to-settle cases. Then, the comparison seems to say that the program stinks when actually what was going on was that they were the hardest set of cases to begin with.

Before J-PAL came along, many decisions in international development were based on experts’ instincts—the kinds of judgments that we’ve just been discussing. Now it looks like, largely due to J-PAL and the work that it’s done, the RCT is more and more the gold standard. How did that change occur? How did J-PAL accomplish that?

Glennerster: I wouldn’t want to give all the credit to J-PAL for that change in development. I also wouldn’t want to say that RCTs are the only valid method. But it’s fair to say there has been a big increase in the focus on rigor when it comes to evaluating what we do in international development. Big international agencies are subjecting their programs to rigorous impact-evaluation methodologies—the most common of which is the RCT—much more frequently than they were in the past. For example, I have seen estimates that in certain years the World Bank has subjected around 30 percent of its programs to some sort of rigorous impact evaluation.

It’s also the case that what we think is effective has changed because of rigorous study. How did that come about? It was a combination of things. For instance, there was increased interest on the part of the academic community in getting their hands dirty in the nitty-gritty of how programs worked. J-PAL worked hard to team these academics up with the organizations working on the ground. Together, we helped the organizations design new programs and ways to rigorously evaluate their programs, including bringing in an element of randomization. There has also been a lot of dissemination—academic journals, policy workshops, and long-term partnerships between academics and governments and other implementers. This helps governments and others work through the implications of what they do. J-PAL and organizations like ours have in many ways acted as the glue between academics and practitioners in thinking about how to evaluate more rigorously and also how to take on board the results of studies done by others.

Greiner: Can you give us an example of a study that J-PAL’s involved in where you felt like an RCT was really needed to get to the bottom of something—something where an RCT even overturned conventional wisdom on a subject?

Glennerster: A good example of that is microcredit. Microcredit is the idea—and now widespread practice—of providing small loans to the poor that aren’t based on collateral, where the borrower is part of a group, and where there’s some kind of collective responsibility for repaying the loan. It leverages social connections and knowledge in the group about who is likely to repay. And, indeed, microcredit typically has a very high repayment rate. It has spread throughout the world such that millions—often women—receive microcredit as a means of building businesses. Muhammad Yunus, one of the pioneers of microcredit, won the Nobel Peace Prize for the idea.

We noticed, however, that there were two potent assumptions that supported the advancement of microcredit programs. First, there was a belief that it was a significant way to reduce poverty by allowing people to have access to credit. Second, because women were often the recipients of the programs, there was a belief that it would grant them increased decision-making power. However, there was extremely little evidence about whether any of this was effective on the ground. For the most part, microcredit was promoted by simply telling stories of women who had got these loans and had done amazing things with their lives. But there are always amazing women who’ve done amazing things. To figure out whether they’ve done amazing things because of the microcredit is much harder.

Some initial studies attempted to evaluate microcredit by comparing women who’d taken out microcredit loans to women in the same communities who had not. But we get back to the problem: if people self-select into a program, those who select in are arguably very different from those who do not, thereby making it hard to isolate whether or not microcredit was the critical factor in reducing poverty. Because only the people who stepped forward got the microcredit, unless you have a clean comparison group who would have come forward but weren’t offered the program, it is hard to get clear results. That is what an RCT allows.

So, we did this. There was a whole series of impact evaluations—mostly by people within J-PAL—working with microcredit organizations. In the case I was involved in, the microcredit organization picked about 120 communities that, with an unlimited budget, they would have gone to. Of course, they didn’t have the capacity to go to all 120, so we randomly picked 60 from that list. We then compared the results in the communities where they went with the ones they did not go to. The results? There was no reduction in poverty in the communities that had access to microcredit. Nor was there improvement in women’s empowerment. (There was an increase in the number of businesses set up by women, but not businesses that led to a reduction in poverty.) As a result, people’s perception of microcredit has really changed. It’s seen as a useful tool now, but not something that can really affect poverty rates.
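
As a rough sketch of that design, one can think of it this way: the lender’s full wish list supplies both groups, so the comparison communities are ones that would equally have been served. Only the 120/60 split comes from the interview; the village names and the lottery mechanics below are illustrative assumptions.

```python
import random

random.seed(60)  # fixed seed so the lottery can be re-run and verified

# The lender names the 120 communities it would serve with an unlimited
# budget; capacity for only 60 is allocated by lottery.
wish_list = [f"village_{i:03d}" for i in range(1, 121)]

served = set(random.sample(wish_list, 60))              # lender expands here
comparison = [v for v in wish_list if v not in served]  # equally eligible, not served

# Because both groups come from the same wish list, later differences in
# poverty or women's business creation can be attributed to access to
# microcredit rather than to how communities were chosen.
print(f"{len(served)} served, {len(comparison)} comparison")
```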

Greiner: At J-PAL, do you encounter resistance to using the RCT method in the field? What kinds of objections do you frequently encounter, and how do you meet the objections?

Glennerster: We got a lot of objections at the beginning from within the development profession as well as from those in the “evaluation profession” because they were used to using other techniques and didn’t like hearing those techniques were not as credible as they thought. There were also those who criticized RCTs as unethical. Interestingly, we hardly ever got criticism in the communities where we were working. The “unethical” criticism was based on a misunderstanding of how we work. That criticism was often along the lines of, “How can you deny people access to a program just in order to evaluate it?” But it’s really not about denying access to a program. Ninety-five percent of the time, we are working with the implementer to change where they implement their programs rather than reduce the number of people who benefit. Most recently, people got very upset about the idea of evaluating the effect of eyeglasses on children’s learning. I would love to be in a world where every kid in China who needs them got eyeglasses. But if there’s only a limited amount of money to spend on them, it might be worth figuring out how much the eyeglasses increase children’s learning. And then, with the results, you might be able to fund-raise and get enough money to give eyeglasses to everyone.

Greiner: I imagine that it wouldn’t surprise you to know I hear the same sort of objections when we talk about, say, representation for domestic violence victims in civil protection order proceedings. The lawyers have the capacity to represent something like 10 to 15 percent of petitioners in domestic violence proceedings. And yet, when I propose a randomized study, the criticism is, “Why are you denying petitioners access to legal representation?” You’re suggesting that it’s not a problem of denial; it’s a problem of scarcity.

Glennerster: Yes. It’s a problem of scarcity. That’s why we don’t get objections from people on the ground. They understand about scarcity. The other thing that we should recognize is that we don’t know if these programs are good. Indeed, they may actually be causing harm. The idea that we should be going out there and giving them to people when they might actually cause harm—that’s potentially more unethical. We have actually had cases where, to our surprise, programs had negative effects, which just underlines the point. If we go out there and do programs that we think will work but actually have negative effects—if we don’t evaluate them—then that’s a serious problem. If we want to be ethical, we better know that what we’re doing is effective.

Greiner: Would you say there may be certain circumstances, especially where there are high stakes, where there might be an ethical imperative to find out, evaluate, research, and understand what’s going on through an evidence-based method like an RCT?

Glennerster: I may not want to go that far. Do I say you should always do an RCT before you intervene? No. But I think there is an ethical imperative for people who are designing these interventions to think very hard about what the evidence says about what they’re doing and to be aware that just having good intentions doesn’t always mean that you are doing good. You should think about your ethical responsibility to know what you’re doing before you intervene in other people’s lives.

Greiner: The Access to Justice Lab is attempting to revolutionize U.S. law the same way that J-PAL revolutionized development, in the sense that we are trying to turn it into an evidence-based field. By producing evidence of our own, we want to bring about a transformation where people reconsider some of the ethical objections and discomfort they might have had with randomized controlled trials in the law and then use that evidence in setting new policies and framing new questions. We want to get to a place in the legal profession where people are not terrified of intervening, but instead are thinking hard about evaluation and figuring out whether those programs work. What advice from J-PAL’s experience do you have for the Access to Justice Lab so that we can be as effective as J-PAL has been?

Glennerster: One piece of advice is to just start doing it. Then you’ve got some good cases to talk about. If people hear the idea of RCTs in the abstract, they tend to get scared. If you can give examples where you’ve overturned things that people thought they knew, it is easier to see that there’s value in evaluating things. If you find that something underfunded works really well and it then gets more funding, well, then everyone loves you!

The other thing that I think is really important is that when you get down to working with the practitioners who are facing the scarcity constraints, they can be really open to evaluation. Because they don’t know where to put their next person, they tend to value this kind of rigorous evidence; they know they don’t know all the answers. Sometimes there are people at more abstract levels who like to think they know all the answers, but most of the people on the ground know that they don’t. And they could desperately do with some help in figuring out where to put those scarce resources. They understand they’re facing scarcity. They know that there are real tradeoffs, and they don’t know how to deal with them. Those people are our most important allies. They want to help in the best way possible.


Rachel Glennerster is the executive director of the Abdul Latif Jameel Poverty Action Lab (J-PAL) at the Massachusetts Institute of Technology. She also serves as the scientific director for J-PAL Africa and cochair of J-PAL’s Education Sector.

Jim Greiner is the faculty director of the Access to Justice Lab and the Honorable S. William Green Professor of Public Law at Harvard Law School.