Staff at the Mulago Foundation recently commented on the results of IPA’s impact evaluation of GiveDirectly’s cash transfer program. Broadly speaking, they see the results as “important” but think the media have overhyped them. As an organization, there is nothing we are more skeptical of than hype.
Our intention from the start has been to search for well-documented, independently verified solutions to extreme poverty, not the trendy intervention of the day. Indeed, we encourage all donors to hold development organizations to a higher bar by asking three basic questions:
- What is the evidence base for the intervention’s long-term impact?
- Is there third-party experimental evidence of the specific program’s success?
- How exactly will a marginal dollar get spent?
The unfortunate reality is that too few organizations currently address these questions. We’re proud that we do and that as a result anyone can weigh in on the results.
Are cash transfers an experiment? The authors refer to GiveDirectly as an “experiment” and urge caution in pursuing untested interventions. We too urge caution in pursuing untested interventions; this is precisely the reason we founded GiveDirectly, which is anything but an experiment. We believe there is deeper, more conclusive evidence on the effectiveness of cash transfers than almost any other development intervention we can find.
Specifically, cash transfers have been described as “among the most thoroughly researched interventions” by the UK’s Foreign, Commonwealth & Development Office (FCDO), with dozens of academic publications across more than 13 countries documenting meaningful impact. Cash transfers are one of a handful of development interventions whose impact has been studied experimentally beyond three years, with recent studies finding annual rates of return of 40% after 4 years in Uganda (Blattman et al, QJE, 2013) and 80% after 5 years in Sri Lanka (de Mel et al, Science, 2012). And GiveDirectly is one of the few non-profits to have been evaluated in a randomized controlled trial by a third party – in our case, Innovations for Poverty Action. Even organizations that have been externally evaluated have, sadly, often kept that fact secret and released the results only when they proved favorable. We’re proud to have done otherwise, pre-announcing the evaluation and binding ourselves to an honest discussion of the results before we ever saw them.
Are the impacts meaningful? A second concern of the Mulago authors is about the magnitude of individual effects. In some cases this reflects misinterpretation of the results (details below). The bigger issue is what rigorously tested alternatives are available. For example, the authors are concerned about the 28% monetary rate of return that GiveDirectly recipients see half a year after transfers ended. To be concrete, this rate implies that an average recipient who received $500 would more than double that initial transfer within three years, a strong result. The authors claim that other interventions can achieve even higher returns – up to 300%, a figure that exceeds even the largest estimates of returns to capital in Sub-Saharan Africa (e.g. Udry & Anagol, AER 2006). To our knowledge, however, none of these claims has been substantiated by a credible, independent, experimental evaluation. If and when they are, we will of course be keen advocates.
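The compounding claim above is easy to verify. A minimal sketch, using only the $500 transfer and the 28% annual rate cited in the paragraph (the three-year horizon assumes the rate persists, as the text does):

```python
# Check: does a 28% annual return more than double a $500 transfer in 3 years?
transfer = 500.0       # initial transfer, USD (figure from the text)
annual_rate = 0.28     # annual monetary rate of return (figure from the text)

value_after_3_years = transfer * (1 + annual_rate) ** 3
print(round(value_after_3_years, 2))   # prints 1048.58, more than 2 x 500
```

So compounding at 28% for three years turns $500 into roughly $1,049, just over double the initial transfer.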
What does “success” mean? Third, the Mulago post highlights important questions about the definition of success. In their post, for example, they explicitly put most emphasis on earnings impacts while putting none at all on reductions in violence against women. Other donors may feel differently. This is precisely why IPA’s study was designed to measure and publish an unusually broad set of outcomes: any donor, with any objective, can get a complete picture and then decide for themselves whether to give. And the poor themselves may feel differently. Recipients in Western Kenya spent relatively little money on health or education, while recipients in other contexts have used unconditional cash transfers to sharply reduce HIV and HSV-2 infection rates and increase schooling (Baird, Garfein, McIntosh and Ozler 2012; Benhassine, Devoto, Duflo, Dupas and Pouliquen 2013). A broad lesson from this body of research is that what poor people do with money varies with their needs and circumstances.
As we have said before, and said publicly, cash transfers are not the single solution to poverty. In implementing them we have tried to adhere to high standards of transparency and evidence that are far too rarely met. We hope more organizations will do the same, so that we can all talk not just about GiveDirectly’s evidence but about the relative merits of different interventions and organizations. It is this conversation that will move us beyond rhetoric and hype to credible, measurable, and long-lasting impact. We look forward to having it.
Some specific issues of interpretation:
Estimating the persistence of treatment effects. The Mulago post raises the sensible question of whether impacts might fade out over time after the initial transfer. It completely omits discussion, however, of the evidence presented in IPA’s evaluation on this point. Those data show that, over the range of time lags studied, effects were generally persistent and in some cases tended to increase, not decrease, with time.
Comparing magnitudes across different kinds of intervention. The Mulago post compares increases in income attributable to cash transfers to those claimed by other interventions designed to increase incomes. This is an extremely low bar to apply since, by design, cash transfers give poor people the flexibility to use money for purposes other than income generation. For example, recipients spent an average of 18% of their transfers on roofing investments, which are cost-saving rather than income-generating. Including this spending in the denominator of a return on investment calculation effectively treats that 18% of spending as pure waste.
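The denominator point above can be made concrete. A sketch with deliberately hypothetical numbers (the 18% roofing share is from the text; the $100 transfer and $23 income gain are invented purely for illustration):

```python
# Illustration: how counting cost-saving spending in the denominator
# depresses a measured return on investment.
transfer = 100.0        # hypothetical transfer, USD
roofing_share = 0.18    # share spent on cost-saving roofing (from the text)
income_gain = 23.0      # hypothetical annual income gain, USD

# Naive ROI: divides by the full transfer, implicitly treating
# the roofing spending as if it produced nothing.
roi_full_denominator = income_gain / transfer

# ROI on income-generating spending only.
roi_productive_only = income_gain / (transfer * (1 - roofing_share))

print(round(roi_full_denominator, 3))   # prints 0.23
print(round(roi_productive_only, 3))    # prints 0.28
```

The same income gain looks like a 23% return under the naive calculation but a 28% return once cost-saving spending is excluded from the denominator, which is the distortion the paragraph describes.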
Distinguishing profit and cash flow. The Mulago post treats expenditure on self-employment activities as flow costs. In practice these expenditures include both flow costs and capital expenditures (e.g. inventory expansion), which is why IPA’s study estimates effects on cash in and cash out separately. (The study’s use of the word “profit” is poorly chosen, as the concept being estimated is in fact net cash flow.) The Mulago post similarly misinterprets the ratio of revenue to expenditure in various sub-categories as a measure of return on capital, which it is not.
Calculating returns on metal roofs. IPA’s full study estimates that the average cost of a metal roof among households who bought one was $564, and the average annual savings from repairing and replacing thatch was $107, for a simple rate of return of 19% (all figures PPP). The Mulago post appears to include only replacement costs, as cited in IPA’s shorter policy brief, and thus estimates a lower rate of return.
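The roof calculation above is a one-liner, reproduced here with the figures from the full study (the $564 cost and $107 annual savings are the values cited in the paragraph, PPP):

```python
# Simple rate of return on a metal roof, per the full study's figures.
roof_cost = 564.0        # average cost of a metal roof, PPP USD
annual_savings = 107.0   # avoided thatch repair/replacement per year, PPP USD

simple_rate_of_return = annual_savings / roof_cost
print(f"{simple_rate_of_return:.0%}")   # prints 19%
```

Using only the replacement-cost portion of the savings, as the Mulago post appears to do, shrinks the numerator and so yields the lower rate they report.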