Every year, governments and donors spend billions on teacher training programs across the developing world. The logic seems airtight: better-trained teachers should produce students who learn more. Yet when researchers actually measure what happens after these workshops end, the results are consistently disappointing.

This isn't a minor gap between expectation and reality. Systematic reviews of in-service teacher training programs find little to no measurable impact on student learning outcomes in the majority of cases. The workshops happen, certificates are issued, budgets are spent—and classrooms remain largely unchanged.

The persistence of this pattern raises an uncomfortable question for development practice. If the evidence has been accumulating for decades, why does teacher training remain the default intervention? And what does the evidence say actually works instead? The answers reveal something important about how development decisions get made—and how they could be made better.

The Training Default

When education indicators disappoint—low test scores, high dropout rates, poor literacy—the first response from ministries and donors is almost always the same: train the teachers. It's a reflex so deeply embedded in education development that it rarely gets questioned before funding is approved.

The reasons are partly institutional. Teacher training is logistically simple to organize, easy to quantify in reports ("15,000 teachers trained"), and politically uncontroversial. No one opposes investing in teachers. It fits neatly into project cycles—a two-week workshop can be planned, executed, and reported within a single fiscal year. Compare that to the messy, long-term work of reforming curricula, restructuring school management, or addressing the poverty-driven reasons children don't learn.

There's also a deep assumption at work: that teachers underperform primarily because they lack knowledge or pedagogical skills. This frames the problem as a deficit in the teacher that a training course can fill. But research increasingly suggests the binding constraints are elsewhere—in motivation, accountability, ongoing support, and the conditions teachers actually face in overcrowded, under-resourced classrooms.

The result is a self-reinforcing cycle. Training programs are easy to fund, easy to implement, and easy to report on. Their failure to produce learning gains is rarely tracked rigorously enough to disrupt the next funding cycle. By the time evaluations surface, the next round of workshops is already being planned. The training default persists not because it works, but because the institutional incentives favor it.

Takeaway

When an intervention persists despite weak evidence, look at the institutional incentives. The question isn't just "does this work?" but "who benefits from continuing to assume it does?"

What the Evidence Shows

The evidence base on in-service teacher training is now substantial—and sobering. A landmark review by researchers at the World Bank examined over 100 impact evaluations of education interventions in developing countries. Traditional teacher training programs—typically multi-day workshops covering pedagogy or subject content—showed among the weakest effects on student learning of any intervention category studied.

Several patterns explain why. Most training programs are one-off events, disconnected from teachers' daily classroom reality. Teachers sit through lectures about active learning methods—delivered, ironically, through passive lectures. The content is often generic, designed at the national level without adaptation to what teachers in specific contexts actually struggle with. A 2019 meta-analysis found that training programs shorter than a cumulative 100 hours showed essentially zero average effect on student outcomes.

Even when training content is well-designed, the transfer problem is enormous. Teachers may understand a new technique in a workshop setting and still fail to implement it when facing 60 students, no materials, and an inflexible curriculum. Studies using classroom observation after training interventions consistently find that teacher behavior changes are small and fade quickly—often within weeks of the training ending.

This doesn't mean teacher quality is irrelevant. It's one of the strongest predictors of student learning. The problem is that short-term training is a remarkably poor tool for changing deeply ingrained teaching practices. Decades of cognitive science research show that complex skill change requires practice, feedback, and sustained support—none of which a workshop provides.

Takeaway

Knowing what to do and consistently doing it under pressure are fundamentally different things. Interventions that treat complex behavioral change as an information problem will almost always underperform.

More Effective Alternatives

If workshops don't work, what does? The most promising evidence points toward interventions that provide ongoing, structured support embedded in teachers' daily practice. Coaching and mentoring programs—where trained instructional coaches observe lessons and give specific, actionable feedback on a regular schedule—show significantly larger effects than traditional training across multiple rigorous evaluations.

A randomized controlled trial of a coaching program in South Africa found learning gains equivalent to one to two additional years of schooling. Similar results have emerged from programs in Brazil, Kenya, and Pakistan. The common thread isn't any specific pedagogical approach—it's the mechanism of sustained, practice-based feedback. Teachers change behavior when someone helps them apply new techniques in their actual classroom, not in a hotel conference room.

Structured pedagogy programs offer another evidence-backed approach. These combine scripted or semi-scripted lesson plans with teacher guides, student materials, and regular support visits. Critics call them overly rigid, but the evidence is hard to argue with. A systematic review found structured pedagogy programs produced learning gains roughly ten times larger than traditional in-service training. They work partly because they reduce the transfer problem—teachers don't have to figure out how to translate abstract principles into daily lessons.

Technology-aided approaches are also showing promise, particularly when they deliver micro-lessons, reminders, or peer support through teachers' phones. These aren't replacements for human coaching, but they can extend its reach. The broader principle across all effective approaches is the same: change happens through repeated, contextualized practice with feedback—not through information transfer events.

Takeaway

Effective teacher improvement mirrors how any complex skill develops: not through instruction alone, but through structured practice with ongoing feedback in the environment where the skill must actually be performed.

The gap between teacher training spending and teacher training results is one of the best-documented failures in education development. It persists because institutional incentives reward activity over impact, and because the assumption that information changes behavior is deeply intuitive even when wrong.

The evidence points clearly toward alternatives: coaching, structured pedagogy, and sustained classroom-level support. These are harder to implement and harder to scale than workshops. But they actually work.

Development practice improves when we stop asking "does this seem reasonable?" and start asking "does this produce measurable results?" In education, the answers are increasingly clear. The question is whether funding decisions will follow.