For decades, the development community has operated on a fundamental assumption: poor people need experts to decide what they need. We've built elaborate systems to deliver food, construct wells, distribute bed nets, and train farmers—always with well-intentioned outsiders determining what help looks like.
Then researchers started running randomized controlled trials comparing these complex interventions to something remarkably simple: just giving people money. The results have been consistently surprising to donors and development professionals alike, though perhaps not to the recipients themselves.
The evidence base now spans dozens of countries and hundreds of studies. Cash transfers don't just work—they frequently outperform programs that cost far more to deliver. This creates an uncomfortable question for an industry built on expertise: what if the best thing we can do is get out of the way?
The Evidence Paradox: Rigorous Studies Challenge Conventional Wisdom
When GiveDirectly began running randomized controlled trials on unconditional cash transfers in Kenya in 2011, many development professionals predicted failure. Recipients would waste money on alcohol and tobacco. Men would squander funds meant for families. Short-term consumption would crowd out investment. The poor, the thinking went, were poor partly because they made poor decisions.
The data told a different story. A 2013 study by Johannes Haushofer and Jeremy Shapiro found that recipients invested in durable goods, livestock, and home improvements. Alcohol and tobacco consumption didn't increase. Mental health improved. Children's nutrition got better. These weren't one-off findings—they replicated across Sub-Saharan Africa, Latin America, and South Asia.
A systematic review by the Overseas Development Institute examined 165 studies and found cash transfers consistently improved education, health, and economic outcomes. The World Bank's analysis of 56 programs in low and middle-income countries found no systematic evidence that cash transfers increase spending on temptation goods.
The paradox is institutional rather than empirical. Despite overwhelming evidence, donor skepticism persists. Development organizations continue funding complex interventions with weaker evidence bases while cash programs struggle for resources. This gap between evidence and practice reveals how deeply the development industry's identity is tied to delivering expertise rather than resources.
Takeaway: When rigorous evidence consistently contradicts institutional assumptions, the problem usually lies with the assumptions rather than the evidence. Effective development practice requires updating beliefs based on data, even when findings challenge professional identity.
Recipient Knowledge: Local Context Beats External Expertise
A farmer in rural Uganda knows things that no development economist in Washington can know. She knows which of her children is closest to dropping out of school. She knows whether her roof will survive another rainy season. She knows which neighbor might sell a goat at a fair price. She knows her husband's reliability with money.
Traditional aid programs ignore this distributed knowledge. A nutrition program delivers fortified flour to families who need school fees. A well-construction project serves a village that actually needs a road to market. A livestock program provides chickens to households that lack secure storage. External assessments, however sophisticated, cannot capture the granular priorities that families understand intuitively.
Cash transfers harness this local knowledge automatically. Recipients allocate resources based on their actual constraints and opportunities—information that would cost millions to gather through surveys and assessments. Economist Abhijit Banerjee's research demonstrates that the poor are sophisticated economic actors, constantly making complex trade-offs with limited resources.
This doesn't mean cash always wins. Certain goods have genuine externalities that individual households won't fully account for—vaccines that prevent disease transmission, education that benefits whole communities, infrastructure that requires coordination. But for most poverty-reduction goals, the question isn't whether recipients have good judgment. The evidence suggests they typically have better judgment about their own lives than outsiders do.
Takeaway: Development expertise is valuable for understanding systemic constraints and designing enabling environments, but it rarely exceeds recipients' knowledge of their own priorities. Effective programs leverage local knowledge rather than substituting external judgment for it.
Program Cost Comparison: Following the Money
Consider a hypothetical $100 donation to a traditional food aid program. International shipping might claim $15-20. Warehousing and logistics take another $10-15. Local distribution networks require vehicles, staff, and coordination, perhaps another $20. Administrative overhead for compliance, reporting, and management absorbs $15-25. The food that actually reaches a hungry family might represent only $20-40 of the original donation.
Cash transfer programs invert this equation. Mobile money systems can deliver funds for 2-5% transaction costs. Administrative overhead for unconditional transfers runs 5-15%. GiveDirectly reports delivering roughly 83 cents of every donated dollar directly to recipients. The comparison isn't even close.
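To make the arithmetic concrete, here is a minimal sketch in Python that works through the comparison using midpoints of the illustrative ranges above. The specific figures and line items are assumptions for demonstration, not audited program budgets.

```python
# Rough comparison: how much of a $100 donation reaches recipients?
# All figures are illustrative midpoints of the ranges cited above, not audited data.

DONATION = 100.00

# Traditional in-kind program: per-donation overhead line items (USD).
in_kind_overhead = {
    "international shipping": 17.50,         # midpoint of $15-20
    "warehousing and logistics": 12.50,      # midpoint of $10-15
    "local distribution": 20.00,             # vehicles, staff, coordination
    "administration and compliance": 20.00,  # midpoint of $15-25
}

# Unconditional cash transfer: proportional overhead.
CASH_OVERHEAD_RATE = 0.035 + 0.10  # ~3.5% mobile-money fees plus ~10% admin

def delivered_in_kind(donation, overhead_items):
    """Value reaching the recipient after subtracting line-item overhead."""
    return donation - sum(overhead_items.values())

def delivered_cash(donation, overhead_rate):
    """Funds reaching the recipient after proportional overhead."""
    return donation * (1 - overhead_rate)

if __name__ == "__main__":
    food = delivered_in_kind(DONATION, in_kind_overhead)
    cash = delivered_cash(DONATION, CASH_OVERHEAD_RATE)
    print(f"In-kind aid:   ${food:.2f} of ${DONATION:.0f} reaches the family ({food / DONATION:.0%})")
    print(f"Cash transfer: ${cash:.2f} of ${DONATION:.0f} reaches the family ({cash / DONATION:.0%})")
```

The exact numbers matter less than the shape of the result: proportional overhead of 10-15 percent versus line-item overhead that consumes most of the donation.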
Traditional programs justify higher overhead by arguing that in-kind assistance provides value beyond market prices—expertise, quality assurance, coordination benefits. Sometimes this is true. Vaccination campaigns require cold chains and trained health workers. Infrastructure projects need engineering expertise. Emergency response demands rapid logistical capacity that markets can't provide.
But much development programming doesn't meet this bar. Job training programs, agricultural extension services, and livelihood interventions often deliver benefits that recipients could have purchased themselves—at lower cost and better matched to their actual needs. The honest question for any development program is whether the expertise and coordination it provides exceeds the value lost to overhead. For many interventions, the evidence suggests it doesn't.
Takeaway: Before designing any development program, calculate what percentage of resources actually reaches beneficiaries versus what funds the delivery system itself. If overhead exceeds 40%, the burden of proof should fall on demonstrating why direct transfers wouldn't work better.
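If it helps to operationalize that rule of thumb, a sketch of the check might look like the following; the 40% threshold is the heuristic stated above, not a sector standard, and the function names are hypothetical.

```python
def overhead_share(total_budget, amount_reaching_beneficiaries):
    """Fraction of the budget consumed by the delivery system itself."""
    return 1 - amount_reaching_beneficiaries / total_budget

def needs_justification(total_budget, amount_reaching_beneficiaries, threshold=0.40):
    """True if overhead exceeds the threshold, shifting the burden of proof
    onto the program to show why a direct transfer would not work better.
    The 40% cut-off is the heuristic above, not an established standard."""
    return overhead_share(total_budget, amount_reaching_beneficiaries) > threshold

# Example: a program that delivers $30 of value per $100 spent carries 70%
# overhead, so needs_justification(100, 30) returns True.
```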
The cash transfer evidence doesn't invalidate all traditional development work. Public goods, coordination problems, and genuine externalities still require programmatic intervention. Markets fail. Governments need strengthening. Infrastructure requires collective action.
But cash transfers have fundamentally shifted the burden of proof. Any development intervention must now answer a basic question: why not just give people the money instead? Programs that can't demonstrate superior outcomes relative to their costs face a legitimacy problem.
The deeper lesson is about humility. Development progress comes not from having better answers than poor people about their own lives, but from removing the constraints that prevent them from implementing solutions they already know.