Few development interventions have been subjected to as much rigorous evaluation as conditional cash transfers (CCTs). Since Mexico's PROGRESA program launched in 1997, CCTs have spread to over sixty countries and generated hundreds of experimental and quasi-experimental studies. We now have an unusually rich evidence base, one that allows us to move beyond asking whether CCTs "work" and toward understanding when, how, and for whom they do.

The accumulated evidence reveals a pattern that should inform how we think about program design. Some outcomes show remarkably consistent positive impacts across diverse contexts. Others depend heavily on implementation details, local conditions, and complementary investments. And some of the most important questions—particularly around long-term human capital formation—remain genuinely unresolved despite years of research.

This synthesis draws on systematic reviews, meta-analyses, and key individual trials to separate robust findings from context-dependent results. The goal is not to render a verdict on CCTs as a policy instrument, but to identify where the evidence is strong enough to guide program design and where honest uncertainty should temper our confidence. For practitioners designing or scaling transfer programs, understanding this distinction matters enormously.

Consistent Impacts: Where the Evidence Is Strongest

Three outcome domains show sufficiently consistent effects across studies to be considered robust findings. School enrollment increases are perhaps the most replicated result in development economics. Meta-analyses consistently find enrollment effects of 5-10 percentage points, with larger impacts at educational transition points—particularly the shift from primary to secondary school. These effects appear in Latin America, Sub-Saharan Africa, and South Asia, across programs with varying designs.

Preventive health service utilization tells a similar story. Programs that condition on health clinic visits for children and pregnant women reliably increase those visits. The magnitudes vary, but the direction is consistent. Growth monitoring, vaccination rates, and prenatal care all show positive effects in the majority of studies. This matters for health outcomes that depend primarily on contact with the health system rather than on the quality of care received.

Consumption smoothing represents a third area of consistent impact. Households receiving regular transfers show reduced consumption volatility and improved food security. This shouldn't surprise us—giving poor households money predictably increases their purchasing power. But the evidence confirms that transfers are not simply consumed immediately; they enable families to maintain consumption during shocks and invest in productive assets.

What unites these consistent findings is their proximity to program mechanics. Enrollment and health visits are directly conditioned behaviors. Consumption is directly affected by cash receipt. The causal chain from intervention to outcome is short and straightforward. This pattern suggests a principle: CCT effects are most reliable where the pathway from transfer to outcome involves few mediating steps.

The consistency of these findings has important implications for program justification. If the goal is increasing school enrollment or preventive health contacts, the evidence base is strong enough to support CCT implementation with reasonable confidence. The remaining questions concern cost-effectiveness comparisons with alternative interventions, not whether effects exist.

Takeaway

CCT effects are most reliable where the causal chain from cash receipt to outcome is short and direct—enrollment, health visits, and consumption respond consistently because they're close to the intervention mechanism.

Conditionality Debates: What Drives Differential Effects

The defining feature of CCTs—requiring specific behaviors in exchange for transfers—has been directly tested against unconditional alternatives in numerous experiments. The evidence here is more nuanced than either CCT advocates or critics often acknowledge. Conditions do appear to matter for some outcomes, but the magnitude of their additional impact depends heavily on context.

Head-to-head comparisons generally find that conditionality produces larger effects on the conditioned behaviors themselves. Malawi's Zomba experiment found schooling effects for conditional transfers that were roughly double those of unconditional transfers, and the Burkina Faso pilot found that conditions mattered most for the children households were least inclined to invest in. Morocco's Tayssir experiment complicates the picture: a transfer merely labeled for education, with no enforced conditions, achieved effects comparable to an explicitly conditional one. Still, the broad pattern holds: when you pay people specifically to do something, more of them do it than when you simply give them money.

But this finding requires careful interpretation. First, the unconditional transfers in these studies still produce substantial effects—often 60-80% of the conditional impact. The marginal contribution of conditionality may not justify its administrative costs in all settings. Second, unconditional transfers sometimes outperform conditional ones on outcomes not tied to conditions, suggesting households make reasonable allocation decisions when given flexibility.
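The cost-effectiveness logic in this paragraph can be made concrete with a back-of-envelope sketch. Every figure below (transfer amounts, administrative costs, effect sizes) is a hypothetical assumption chosen only to illustrate the arithmetic, not an estimate from any study discussed here:

```python
# Hypothetical back-of-envelope comparison of conditional (CCT) vs
# unconditional (UCT) transfers. All numbers are illustrative assumptions.

def cost_per_enrollment_point(effect_pp, transfer_cost, admin_cost):
    """Cost per household of one percentage point of enrollment gain."""
    return (transfer_cost + admin_cost) / effect_pp

# Suppose a CCT raises enrollment by 8 pp and a UCT by 6 pp (75% of the
# conditional impact, inside the 60-80% range noted in the text), and that
# monitoring compliance adds $20 per household to administrative costs.
cct = cost_per_enrollment_point(effect_pp=8, transfer_cost=100, admin_cost=25)
uct = cost_per_enrollment_point(effect_pp=6, transfer_cost=100, admin_cost=5)

# Marginal cost-effectiveness of conditioning: the extra spending divided
# by the extra enrollment it buys.
marginal = ((100 + 25) - (100 + 5)) / (8 - 6)

print(f"CCT: ${cct:.2f} per pp; UCT: ${uct:.2f} per pp")
print(f"Conditionality at the margin: ${marginal:.2f} per additional pp")
```

On these stipulated numbers, conditioning is worthwhile only if a percentage point of enrollment is worth more than the marginal monitoring cost per point; raise the monitoring cost or shrink the effect gap and the calculus flips, which is precisely why the value of conditions is context-dependent.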

The mechanisms driving conditionality effects remain debated. Three hypotheses have empirical support: conditions may change household budget allocations by labeling money for specific purposes; they may shift intrahousehold bargaining power toward women and children; and they may provide social justification for investments in children that households already wanted to make but faced community pressure against.

Program implementation capacity emerges as a crucial moderator. Conditions only function if they're monitored and enforced. In settings with weak administrative infrastructure, the difference between conditional and labeled unconditional transfers may be minimal in practice. This suggests that optimal program design should account for state capacity—conditions may add value primarily where governments can credibly verify compliance.

Takeaway

Conditionality produces additional effects, but often modest ones relative to unconditional transfers. The value of conditions depends on administrative capacity to enforce them and whether the marginal behavioral change justifies monitoring costs.

Long-Term Learning Outcomes: The Ambiguous Evidence

Here the evidence becomes genuinely troubling for CCT proponents. While enrollment effects are robust, impacts on learning outcomes are inconsistent and often small. Multiple studies find no significant effects on test scores despite substantial enrollment gains. The uncomfortable implication is that getting children into school doesn't guarantee they learn more.

Several mechanisms may explain this pattern. Marginal students brought into school by CCTs may differ systematically from those who would have enrolled anyway—they may face greater learning barriers or receive less household support. Schools may fail to adjust to increased enrollment, leading to overcrowding and reduced per-student resources. And enrollment itself is a blunt instrument; time-in-school doesn't equal engaged learning time.

The longest-term follow-up studies present a mixed picture. Nicaragua's Red de Protección Social evaluation found modest but significant effects on completed schooling and earnings ten years after program inception. Colombia's Familias en Acción showed similar patterns. But other programs show minimal long-term impacts, and even positive findings rarely exceed the simple effect of additional years of schooling. Evidence of enhanced human capital beyond enrollment duration remains elusive.

This pattern points to a fundamental limitation of demand-side interventions. CCTs address barriers related to household resources and opportunity costs. They don't address supply-side constraints: teacher quality, curriculum relevance, school infrastructure, or pedagogical practice. Where these constraints bind tightly, additional enrollment may not translate to additional learning.

The evidence suggests CCTs work best as complements to supply-side investments, not substitutes for them. Programs that combine conditional transfers with school quality improvements show more promising learning results. This has significant implications for cost-effectiveness calculations—attributing learning gains to CCTs alone likely overstates their contribution in settings where complementary investments drove actual skill acquisition.

Takeaway

Enrollment is not learning. The gap between CCT effects on school attendance and effects on actual skill acquisition reveals the limits of demand-side interventions when supply-side constraints remain unaddressed.

Fifteen years of evidence leaves us with a clear picture of what CCTs reliably accomplish and honest uncertainty about what they don't. They consistently increase school enrollment, preventive health service use, and consumption stability. The additional impact of conditionality over unconditional transfers is real but context-dependent. And the translation of enrollment gains into learning and long-term human capital remains the critical knowledge gap.

For program designers, these findings suggest several principles. Choose CCTs when the binding constraint is household resources and opportunity costs, not service quality. Invest in administrative capacity before imposing complex conditions. And pair demand-side transfers with supply-side investments if learning outcomes matter.

The evidence base is mature enough to guide design choices, but not mature enough to promise transformational impacts on poverty. CCTs are a useful tool—not a silver bullet. Acknowledging this distinction is what evidence-based practice requires.