And Now, the “Methods Revolution”


The cure for scurvy became known to Portuguese explorer Vasco da Gama when, in 1498, he stopped in Mombasa, along the east coast of Africa, on his way to India – the first such maritime voyage by a European in history. The African king fed the ship’s sailors oranges and lemons, and the disease, which often can be fatal to sailors on ships that remain at sea longer than ten weeks, cleared up. The remedy became a naval secret, then a rumor, and, eventually, folk wisdom. Only in 1747, when British Navy surgeon James Lind performed his famous experiment, did it become reliable knowledge.

Lind divided twelve men suffering from similar symptoms aboard his ship into six pairs and treated each pair with one of six competing nostrums. The pair who received oranges and lemons recovered; the others did not. It took another forty years (and the onset of a desperate war with France) for the British Admiralty to require that a ration of lemon juice be provided regularly to sailors throughout the fleet. Not until the 1930s did biochemist Albert Szent-Györgyi pin down that it is ascorbic acid, AKA vitamin C, that does the trick.

Since then, the practice of inferring causation by comparing a “treatment group” receiving a certain intervention with a “control group” receiving nothing of the sort has been considerably refined. Agronomists began using the technique in the early twentieth century to improve plant yields through hybridization. Statisticians soon tackled the problem of experiment design. The first randomized controlled trial in medicine was reported in 1948, a test of the effectiveness of streptomycin in treating tuberculosis.

Beginning in the 1980s, economists adopted randomized controlled trials across a broad swathe of microeconomics, distinguishing between “natural experiments,” in which nature or history formulates the treatment and control groups, and “field experiments,” in which investigators arrange interventions themselves and then follow their effects on participants making decisions in everyday life. Early experiments with negative income taxes were assessed by Jerry Hausman and David Wise in 1985; the RAND Health Insurance Experiment, analyzed by Joseph Newhouse, in 1993; a series of welfare reform experiments conducted by economic consulting firms for the Ford Foundation in the ’80s and ’90s, surveyed by Charles Manski and Irwin Garfinkel in 1992; and experiments in early childhood education, especially the Perry Preschool Project, begun in 1963, and introduced to economists by Lawrence Schweinhart, Helen Barnes, and David Weikart in 1993. An especially striking exemplar of the new approach came from Joshua Angrist, who in 1990 used the draft lottery to study the effect of Vietnam-era conscription on lifetime earnings.

Behind the scenes, of course, enabling the revolution, was the advent of essentially unlimited computing power and the software to put it to use, searching out all kinds of new data and analyzing them. Major developments along the way were described by Hausman and Wise (Social Experimentation, 1985); James Heckman and Jeffrey Smith (“Assessing the Case for Social Experiments,” 1995); Glenn Harrison and John List (“Field Experiments,” 2004); Angrist and Jörn-Steffen Pischke (“The Credibility Revolution in Empirical Economics,” 2010); David Card, Stefano DellaVigna, and Ulrike Malmendier (“The Role of Theory in Field Experiments,” 2011); Manski (Public Policy in an Uncertain World: Analysis and Decisions, 2013); Angus Deaton and Nancy Cartwright (“Understanding and Misunderstanding Randomized Controlled Trials,” 2016); and Susan Athey and Guido Imbens (“The Econometrics of Randomized Experiments,” 2017). Most of this can be gleaned from the first few pages of the Royal Swedish Academy of Sciences’ Scientific Background to this year’s Nobel Prize.

Confronted with this Tolstoyan sprawl, the Nobel committee earlier this month finessed the problem of allocating credit by singling out the sub-discipline of development economics as a field in which experimentation is said to have shown particular promise. Recognized were Abhijit Banerjee, 58, and Esther Duflo, 46, both of the Massachusetts Institute of Technology; and Michael Kremer, 55, of Harvard University, for having pioneered the use of randomized controlled trials (RCTs) to assess the merits of various anti-poverty interventions.

In Duflo, the committee got what it wanted: a female laureate in economics, only the second to be chosen, and a young one at that. (Elinor Ostrom, then 78, who was honored with Oliver Williamson in 2009, died less than three years after receiving the award.) Duflo’s mother was a pediatrician who traveled frequently to Rwanda, Haiti, and El Salvador to treat impoverished children or victims of war, according to Herstory. Duflo herself formed a lifelong obsession with India at the age of six, while reading a comic book about Mother Teresa, the Albanian nun who operated a hospice in Calcutta (now Kolkata). As a student at the École Normale Supérieure, Duflo switched to economics from history while working for a year in Russia, observing the work of American economic advice-givers first hand. Duflo was Kremer’s and Banerjee’s student at MIT; the university hired her upon graduation and tenured her after Princeton sought to lure her away.

Banerjee grew up in Kolkata, the son of a distinguished professor of economics. An interview with The Telegraph gives a vivid picture of the rich intellectual life of Bengal. He earned his PhD from Harvard in 1988 with a trio of essays in information economics, and taught at Princeton, then Harvard, before moving to MIT. There, in 2003, with Duflo, Kremer, and others, he founded the Abdul Latif Jameel Poverty Action Lab, known colloquially as J-PAL, its researchers self-identifying as randomistas. In 2011, he and Duflo published Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty (PublicAffairs), a primer on RCTs. By then he had divorced his first wife. In 2015, he and Duflo married.

Of the three, Kremer was the pioneer. After graduating from Harvard College in 1985, he taught high school in Kenya for a year. Returning to Harvard to study economics with Robert Barro, in a period of great ferment, he made two durable contributions to what was then the “new” economics of growth: “Population Growth and Technological Change: One Million B.C. to 1990” and “The O-Ring Theory of Economic Development,” both in 1993. With “Research on Schooling: What We Know and What We Don’t,” in August 1995, Kremer asked a series of questions; six months later, in “Integrating Behavioral Choice into Epidemiological Models of the AIDS Epidemic,” he developed a model of a different problem whose implications might be tested with a new approach: randomized controlled trials. Since then, he has kept up a drumbeat of influential papers – health treatments, patent buyouts, elephant conservation, vaccine-purchase commitments, the repeal of odious debt – including several with his wife, the British economist Rachel Glennerster.

“The research conducted by this year’s Laureates has considerably improved our ability to fight global poverty,” asserted the Nobel press release. “In just two decades, their new experiment-based approach has transformed development economics, which is now a flourishing field of research.” One reason it is flourishing is the availability of a deep river of global funding: the World Bank, the United Nations, and several major philanthropies regularly invest far larger sums in development research than in most other areas of inquiry. Those projects offering carefully designed experiments, promising reliable answers to perplexing questions, enjoy a significant advantage in the competition for research funds.

For a well-informed description of some of the work and its limitations, see Kevin Bryan’s post at A Fine Theorem, “What Randomization Can and Cannot Do.” For some sharp criticism, read “The Poverty of Poor Economics,” on Africa Is a Country’s site. (“Serious ethical and moral questions have been raised particularly about the types of experiments that the randomistas… have been allowed to perform.”) Remember, too, that problems of agricultural policy that are fundamental to poverty reduction lie far beyond the reach of RCTs. How to escape the middle-income trap? How to build a research system that reaches the technological frontier?

And to be reminded that commerce routinely alleviates more poverty around the world than aid (though hardly all of it), read veteran Financial Times correspondent David Pilling’s recent dispatch on Africa’s increasingly dynamic interaction with the rest of the world, China in particular. “When most people think of China in Africa,” he writes, “they think of mining and construction. But things are moving on. It is no longer the highways where the main action is taking place. It is the superhighways” – of e-commerce in particular.

In short, to speak of a “credibility revolution” seems to me mainly a marketing slogan; it overstates the contribution of the small steps that RCTs deliver, compared to those of theory prior to investigation. “Methods revolution” is a more neutral term. That said, the Nobel panel neatly solved its problem for another year. The prize for RCTs in development economics is the first in what will surely be a series of prizes for new methods-driven results. There will be many more.

