Publication Bias in Development: The Evidence We Never See
Selective publication inflates what works and hides what doesn't—distorting development policy at scale
Development RCTs in Fragile States: Adapting Methods for Difficult Contexts
Rigorous impact evaluation doesn't stop at the borders of conflict zones—it adapts, or it fails the people who need it most
Measuring What Matters: Choosing Outcomes That Capture Real Impact
Why rigorous evaluations of the wrong outcomes may be worse than no evaluation at all
Multifaceted Poverty Interventions: Does Bundling Boost Impact?
When combining anti-poverty interventions helps, when it doesn't, and what the experimental evidence actually shows
Cash vs. In-Kind Transfers: What Experimental Evidence Reveals
When cash outperforms in-kind aid—and the narrow conditions where it doesn't
Process Evaluation: The Missing Piece of Impact Evidence
Impact evaluations tell you what worked—process evaluations tell you everything else you need to know
Health Worker Incentives: Performance Pay in Development Health Systems
Experimental evidence reveals when health worker incentives improve care—and when they merely reshape what gets counted
Heterogeneous Treatment Effects: The Averages Hide Everything
Average treatment effects collapse entire distributions into a single number—and development policy suffers for it
Community-Driven Development: Participation Theory Meets Experimental Reality
RCT evidence reveals when community participation genuinely improves development outcomes and when it merely performs inclusion
Targeting Efficiency: Should Programs Focus on the Poorest?
Precision targeting sounds efficient—until you measure what it actually costs in exclusion errors, administrative burden, and lost coverage
Implementation Science: The Missing Link Between Evidence and Impact
Why proven development interventions fail at scale—and how implementation science frameworks can close that gap
Private Sector Development Programs: Why Business Support Evidence Remains Weak
Billions spent on business training and microfinance—yet rigorous evidence keeps showing effects that are small, temporary, or invisible
Survey Design for Impact Evaluation: Reducing Measurement Error at the Source
Rigorous impact evaluation begins not with randomization but with survey instruments that measure what they claim to measure
Difference-in-Differences: Extracting Causality from Observational Data
When randomization isn't possible, difference-in-differences offers a rigorous path to causal evidence—if you respect its assumptions
Microfinance's Broken Promise: How RCTs Rewrote Development Orthodoxy
When randomized trials finally tested development's favorite intervention, the results demanded a complete rethinking of what credit can achieve
Conditional Cash Transfers: What Fifteen Years of Evidence Actually Shows
Separating robust findings from honest uncertainty in the most-studied development intervention
External Validity: Can Results from Rural Kenya Predict Outcomes in Urban India?
Why promising development evidence often fails to travel, and how to predict when it will
The Hawthorne Effect in Development: When Observation Changes Everything
The intensive observation that accompanies rigorous evaluation may systematically inflate results that disappear once programs scale
Randomization Failures: When Your Control Group Isn't Really a Control
How contamination, attrition, and selection problems silently invalidate your experimental estimates
Regression Discontinuity: Sharp Evidence from Arbitrary Cutoffs
Arbitrary eligibility thresholds transform administrative rules into natural experiments, delivering rigorous causal evidence when randomization proves impossible
Spillover Effects: The Hidden Force Distorting Your Treatment Estimates
Why your control group may already be treated—and how this hidden contamination leads evaluations to systematically understate program effectiveness across development interventions
Why Most Development Programs Fail Before Implementation Even Begins
The hidden design failures that doom development interventions before beneficiaries are ever reached—and the institutional pressures that make them inevitable
Power Calculations: The Math That Separates Credible Evidence from Noise
Master the statistical foundations that determine whether your impact evaluation produces actionable evidence or expensive, misleading noise
Pre-Analysis Plans: Credibility Insurance or Bureaucratic Burden?
Why committing to analytical decisions before seeing results strengthens development research credibility without sacrificing scientific discovery
The Take-Up Problem: Why Eligible Beneficiaries Don't Participate
What keeps eligible people from enrolling—and how evidence-based design can close the participation gap in development programs
Cost-Effectiveness Analysis: The Metric Development Needs But Rarely Uses
Development organizations commission rigorous impact evaluations, then ignore the cost data that determines whether interventions actually deserve funding over alternatives
The General Equilibrium Problem: Why Scaling Changes Everything
Why rigorous pilot results can mislead when interventions grow large enough to reshape the markets they operate within