Development programs designed to help the poor face a deceptively simple problem: identifying who the poor actually are. Governments and aid agencies spend billions on cash transfers, subsidized food, health insurance, and housing assistance, all premised on the assumption that benefits can be directed to those who need them most.
The evidence tells a humbling story. Across dozens of rigorous evaluations spanning Latin America, Africa, and South Asia, targeting systems routinely exclude 30 to 50 percent of genuinely poor households while including substantial numbers of non-poor beneficiaries. The administrative machinery built to achieve precision often produces something closer to a lottery.
This matters beyond accounting. When targeting fails, the poorest are systematically left out of programs designed for them, while political capital is spent defending systems that underperform simpler alternatives. Understanding why targeting is so difficult—and what the evidence suggests about alternatives—is essential for anyone designing or evaluating anti-poverty programs.
The Methods and Their Hidden Assumptions
Three dominant approaches compete for practitioners' attention. Means testing verifies income or assets directly, an approach feasible mostly in economies with formal employment and tax records. In settings where most households work informally and earnings fluctuate seasonally, direct income verification collapses under its own assumptions.
Proxy means testing (PMT) has become the dominant method in developing countries. Enumerators score households on observable indicators—roof material, livestock ownership, education levels—and a regression model predicts consumption. Programs like Mexico's Prospera and Indonesia's PKH rely on variants of this approach. Its appeal is objectivity; its weakness is that predicted consumption is not actual consumption.
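The PMT pipeline described above can be sketched in a few lines. Everything here is invented for illustration — the indicators, their weights, and the noise level are hypothetical, not drawn from any actual program: fit a regression of log consumption on observable indicators, score households with the fitted coefficients, and grant eligibility below a quintile cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Invented survey data: three observable indicators per household.
indicators = np.column_stack([
    rng.integers(0, 2, n),   # improved roof material (0/1)
    rng.poisson(2, n),       # livestock owned
    rng.integers(0, 13, n),  # years of schooling, household head
])

# True log consumption depends on the indicators plus substantial noise,
# reflecting the fact that observables explain only part of welfare.
true_log_consumption = (
    8.0
    + 0.40 * indicators[:, 0]
    + 0.10 * indicators[:, 1]
    + 0.05 * indicators[:, 2]
    + rng.normal(0, 0.5, n)
)

# Fit the PMT: ordinary least squares of log consumption on the indicators.
X = np.column_stack([np.ones(n), indicators])
coefs, *_ = np.linalg.lstsq(X, true_log_consumption, rcond=None)

# Score every household and grant eligibility below a quintile cutoff.
scores = X @ coefs
cutoff = np.quantile(scores, 0.20)
eligible = scores <= cutoff
print(f"{eligible.mean():.1%} of households scored below the cutoff")
```

The gap the article points to lives in the noise term: the model scores households by *predicted* consumption, so two households with identical roofs, livestock, and schooling receive identical scores no matter how differently they actually live.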
Community-based targeting delegates selection to villagers who presumably know their neighbors' circumstances. This method can capture information outsiders cannot observe, but it also imports local power dynamics. Evidence from the Alatas et al. study in Indonesia shows community targeting sometimes matches PMT accuracy—and sometimes systematically favors the socially connected.
Each method rests on assumptions rarely stated aloud: that poverty is stable enough to measure at one point in time, that selected indicators correlate tightly with welfare, and that implementation fidelity matches program design. Violations of any assumption degrade accuracy faster than practitioners typically acknowledge.
Takeaway: No targeting method is neutral. Each embeds assumptions about how poverty manifests and who can observe it, and those assumptions fail quietly when the context does not match the model.
The Arithmetic of Inclusion and Exclusion
The empirical record on targeting accuracy is sobering. A study by Brown, Ravallion, and van de Walle spanning nine African countries found that even well-designed PMT systems typically exclude 40 to 60 percent of the poor when attempting to reach the bottom quintile. The trade-off between inclusion and exclusion errors is mathematically unavoidable at tight eligibility thresholds.
Consider the mechanics. If a PMT model explains 50 percent of variation in consumption—which is considered good performance—then half the variation is noise. Households near the eligibility cutoff are essentially classified by coin flip. Tightening the threshold to reduce inclusion errors automatically increases exclusion errors, and vice versa.
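The coin-flip intuition is easy to check with a small simulation (all parameters illustrative): draw true welfare, construct a predictor that explains half its variance, and target the bottom quintile of each distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# A predictor explaining ~50% of the variance in true welfare:
# predicted = (true + independent noise) / sqrt(2) gives corr^2 = 0.5.
true_welfare = rng.normal(size=n)
predicted = (true_welfare + rng.normal(size=n)) / np.sqrt(2)

# Target the bottom quintile of each distribution.
poor = true_welfare < np.quantile(true_welfare, 0.20)
selected = predicted < np.quantile(predicted, 0.20)

# Exclusion error: share of the truly poor who are not selected.
# Inclusion error: share of the selected who are not truly poor.
overlap = (poor & selected).sum()
exclusion_error = 1 - overlap / poor.sum()
inclusion_error = 1 - overlap / selected.sum()

print(f"exclusion error: {exclusion_error:.0%}")
print(f"inclusion error: {inclusion_error:.0%}")
```

Under these assumptions both error rates land near the 40 percent mark that the empirical literature reports — not because implementation failed, but because half the variation was never in the model to begin with.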
Dynamics compound the problem. A household surveyed during a good harvest may fall into poverty months later; another household may recover. Most programs recertify every three to five years, meaning targeting snapshots become progressively outdated. In contexts with high consumption volatility, even a perfect one-time classification delivers imperfect ongoing targeting.
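The decay of a one-time snapshot can be illustrated the same way. In this sketch welfare follows an AR(1) process with hypothetical persistence (the value of rho and the horizon are assumptions, not estimates from any survey); enrollment is a *perfect* bottom-quintile classification at year 0, and we track how many of the currently poor it misses each year afterward.

```python
import numpy as np

rng = np.random.default_rng(7)
n_households, horizon = 50_000, 4

# Hypothetical welfare dynamics: persistent but volatile AR(1).
rho = 0.8
welfare = rng.normal(size=n_households)

# A perfect classification at year 0: enroll exactly the bottom quintile.
enrolled = welfare < np.quantile(welfare, 0.20)

missed_by_year = []
for year in range(1, horizon + 1):
    # Welfare evolves; the scaling preserves the stationary variance.
    welfare = rho * welfare + np.sqrt(1 - rho**2) * rng.normal(size=n_households)
    poor_now = welfare < np.quantile(welfare, 0.20)
    # Share of the currently poor who are outside the year-0 beneficiary list.
    missed_by_year.append(1 - (poor_now & enrolled).sum() / poor_now.sum())

for year, missed in enumerate(missed_by_year, start=1):
    print(f"year {year}: {missed:.0%} of the currently poor were not enrolled")
```

Even with a flawless initial survey, the missed share grows every year as households churn in and out of poverty — which is the argument for shorter recertification cycles in volatile settings.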
What this means in practice: a program claiming to serve the poor may deliver roughly half its benefits to non-poor households while missing half the intended population. Comparing this to universal or geographically targeted alternatives—rather than to a hypothetical perfect system—often yields uncomfortable conclusions about whether the targeting effort was worthwhile.
Takeaway: Targeting accuracy is bounded by how well poverty can be predicted from observables. Beyond a ceiling set by data quality, additional precision costs more than it delivers.
The Costs Hidden Beneath Precision
Administrative costs of targeting are substantial but often underreported. Registration surveys, verification visits, grievance systems, and periodic recertification can consume 5 to 15 percent of program budgets. In smaller programs, targeting overhead sometimes exceeds the transfer value delivered to marginal beneficiaries.
Social costs receive even less attention. The application process itself imposes time, transport, and documentation burdens that fall disproportionately on the poorest—the very households targeting is meant to reach. Evidence from Peru and India shows that indigenous populations, the disabled, and female-headed households face the highest exclusion rates, partly because application friction compounds existing disadvantage.
Targeting also reshapes social relations in ways rarely measured. Programs that divide communities into beneficiaries and non-beneficiaries can erode solidarity, fuel accusations of corruption, and create political vulnerabilities that shorten program lifespans. Universal programs avoid these frictions but are dismissed as expensive—a judgment that rarely accounts for the full costs of the targeted alternative.
None of this argues against targeting categorically. It argues for honest accounting. When the full costs of precision are compared against its marginal benefits, the case for simpler designs—geographic targeting, categorical eligibility, or universal provision within poor regions—often strengthens. The question is not whether to target but whether the additional precision justifies its price.
Takeaway: Precision is not free. Every layer of targeting machinery extracts administrative, psychological, and social costs that must be weighed against the benefits of finer classification.
Targeting is a tool, not a virtue. Its legitimacy depends on whether it delivers more benefit to the poor than simpler alternatives would, net of all costs. The evidence suggests this threshold is met less often than program design documents assume.
The practical implication is not to abandon targeting but to hold it to the same empirical standard we apply to other interventions. When PMT accuracy is low, transfer values are small, or administrative capacity is thin, the case for universalism within poor areas becomes stronger.
Good development practice means resisting the intuitive appeal of precision and asking instead what actually reaches the poor. That question yields different answers in different contexts—which is precisely why context, not orthodoxy, should drive design.