A drug performs brilliantly in clinical trials. The data look compelling—statistically significant improvements, clear separation from placebo, regulatory approval granted. Then it enters routine practice, and something curious happens. The benefits seem to shrink. Patients don't respond quite as expected. Clinicians wonder if they're doing something wrong.

They're not. What they're witnessing is the efficacy-effectiveness gap—the predictable divergence between how treatments perform under ideal trial conditions and how they work in the messy reality of everyday medicine. This gap isn't a flaw in science. It's a feature of how we generate evidence, with consequences we must understand.

This gap matters enormously for clinical decision-making. When a trial reports a 40% relative risk reduction, what reduction should you actually expect in your patient population? Often considerably less. Understanding why helps clinicians set realistic expectations, identify which patients might genuinely benefit, and recognize when the evidence base may not apply to the person sitting in front of them.
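To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. It translates a trial's relative risk reduction into an absolute benefit for a hypothetical patient and then applies a crude real-world discount. The 8% baseline risk and the 60% discount factor are illustrative assumptions, not figures from any particular trial.

```python
# Back-of-the-envelope arithmetic: what a trial's relative risk reduction (RRR)
# implies for an individual patient, and how a crude real-world discount changes it.
# All numbers below are illustrative assumptions, not data from any specific trial.

def absolute_benefit(baseline_risk: float, rrr: float) -> tuple[float, float]:
    """Return (absolute risk reduction, number needed to treat)."""
    arr = baseline_risk * rrr
    nnt = 1 / arr
    return arr, nnt

trial_rrr = 0.40          # 40% relative risk reduction reported by the trial
baseline_risk = 0.08      # assumed event risk for this patient over the trial horizon

arr, nnt = absolute_benefit(baseline_risk, trial_rrr)
print(f"Under trial conditions: ARR = {arr:.1%}, NNT ≈ {nnt:.0f}")

# Crude effectiveness discount: suppose real-world conditions (partial adherence,
# a broader population) preserve only ~60% of the trial effect -- an assumption.
real_world_rrr = trial_rrr * 0.60
arr_rw, nnt_rw = absolute_benefit(baseline_risk, real_world_rrr)
print(f"With a 60% effectiveness discount: ARR = {arr_rw:.1%}, NNT ≈ {nnt_rw:.0f}")
```

The point of the sketch is not the specific numbers but the structure: the same relative effect yields a much smaller absolute benefit once baseline risk and real-world dilution are taken into account.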

Participant Selection Effects

Clinical trials don't recruit average patients. They recruit ideal patients—carefully selected individuals who meet narrow enrollment criteria designed to maximize the chance of detecting a treatment effect. This isn't dishonest. It's methodologically necessary to establish whether a treatment can work. But it creates populations that look quite different from typical clinical practice.

Consider a typical cardiovascular drug trial. Exclusion criteria often eliminate patients over 75, those with significant comorbidities, individuals on multiple medications, people with cognitive impairment affecting adherence, and those with unstable disease. What remains is a younger, healthier, more motivated subset. They're more likely to tolerate treatment, less likely to have competing health issues, and more capable of following complex protocols.

The consequences are predictable. Treatment effects measured in these selected populations often don't transfer to broader groups. A diabetes medication tested primarily in patients with recent diagnosis and no complications may perform differently in patients with 15-year disease duration and established nephropathy. The biology differs. The competing risks differ. The capacity to benefit differs.

This isn't a minor issue. Studies comparing trial participants to registry populations consistently find striking differences. Trial participants are typically younger and have fewer comorbidities, better baseline function, and higher socioeconomic status. When treatments move into real populations, these differences matter. The carefully measured effect size from the trial becomes an upper bound—a best-case scenario rather than a typical expectation.
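A toy calculation shows how selection alone shrinks an average effect. It assumes, purely for illustration, that the relative benefit is larger in younger, less comorbid patients and that the clinic population contains far more multimorbid patients than the trial did; the subgroup effects and population mixes are hypothetical.

```python
# Toy illustration of selection effects: the same subgroup-specific treatment
# effects, averaged over two different population mixes. Subgroup effects and
# mix percentages are hypothetical, chosen only to show the arithmetic.

# Relative risk reduction within each subgroup (assumed constant across settings)
rrr_by_group = {"younger, few comorbidities": 0.45,
                "older, multimorbid": 0.20}

# Who gets enrolled in the trial vs. who gets treated in clinic
trial_mix  = {"younger, few comorbidities": 0.85, "older, multimorbid": 0.15}
clinic_mix = {"younger, few comorbidities": 0.40, "older, multimorbid": 0.60}

def average_effect(mix: dict) -> float:
    """Population-average RRR as a prevalence-weighted mean of subgroup effects."""
    return sum(mix[group] * rrr_by_group[group] for group in mix)

print(f"Trial population:  average RRR = {average_effect(trial_mix):.0%}")
print(f"Clinic population: average RRR = {average_effect(clinic_mix):.0%}")
```

With these assumed numbers the trial reports roughly a 41% average reduction while the clinic population can expect about 30%, even though the drug's biology has not changed at all.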

Takeaway

Trial populations represent who can benefit under optimal conditions, not who will benefit in practice. The gap between these groups predicts how much real-world effectiveness will fall short of trial efficacy.

Protocol Versus Practice

Clinical trials don't just select different patients—they treat them differently. The trial protocol creates conditions that bear little resemblance to routine care. Every element is optimized: visit frequency, monitoring intensity, adherence support, dose titration schedules. These features aren't incidental. They're deliberately designed to maximize the treatment's chance of success.

Consider what happens in a typical randomized controlled trial. Patients attend frequent scheduled visits—often monthly or more. At each visit, trained staff assess adherence, adjust doses, manage side effects, and reinforce the importance of the treatment regimen. Blood tests and other monitoring occur at regular intervals. Problems are caught early. Non-adherence is identified and addressed. This level of attention simply cannot be replicated in standard practice.

The adherence difference alone is substantial. Trial adherence rates typically exceed 80%, often reaching 90% or higher with intensive support. Real-world adherence for chronic disease medications frequently falls below 50% within the first year. If a medication only works when taken consistently, this gap alone can halve the apparent treatment effect. The drug hasn't changed. The conditions of its use have.
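Under the crude assumption that the full effect applies only to patients who keep taking the drug and that non-adherent patients get no benefit, the dilution is simple multiplication. The adherence figures below are round numbers within the ranges quoted above, not measurements, and real dose-response relationships are messier than this bounding sketch.

```python
# Crude adherence dilution: assume, for illustration only, that the full treatment
# effect applies to adherent patients and no effect applies to non-adherent ones.

rrr_if_adherent = 0.40        # effect among patients who actually take the drug (assumed)

trial_adherence = 0.90        # typical of supported trial settings
real_world_adherence = 0.45   # typical first-year persistence for chronic therapy

print(f"Apparent effect in the trial:    {rrr_if_adherent * trial_adherence:.0%}")
print(f"Apparent effect in routine care: {rrr_if_adherent * real_world_adherence:.0%}")
# 36% vs 18%: on these assumptions, the adherence gap alone roughly halves the observed effect.
```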

Protocol effects extend beyond adherence. Trial patients receive more frequent dose adjustments, more aggressive management of side effects, and more systematic follow-up. They're often enrolled at specialized centers with particular expertise. The comparison group typically receives active attention too—placebo-controlled trials still involve regular monitoring. All of this inflates the measured effect relative to what happens when the treatment enters the real world, prescribed by busy clinicians to patients who may fill the prescription once and never return.

Takeaway

The infrastructure of clinical trials—frequent monitoring, adherence support, dose optimization—is itself a powerful intervention. Strip away this support, and treatment effects predictably diminish.

Effectiveness Research Design

Recognizing these limitations, researchers have developed study designs specifically intended to capture real-world treatment performance. Pragmatic trials deliberately relax the tight controls of explanatory trials. They enroll broader populations, compare treatments against usual care rather than placebo, and measure outcomes that matter to patients in routine practice.

The pragmatic-explanatory continuum describes this spectrum. At one end, explanatory trials ask: "Can this treatment work under ideal conditions?" At the other, pragmatic trials ask: "Does this treatment work in routine practice?" Most trials fall somewhere between, but explicitly pragmatic designs are becoming more common as regulators and payers demand evidence of real-world value.

Large simple trials represent one pragmatic approach—enrolling thousands of patients with minimal exclusion criteria, randomizing them to treatment strategies, and following outcomes through routine data systems. The RECOVERY trial during COVID-19 exemplified this model, rapidly generating evidence about treatments like dexamethasone by embedding randomization into routine NHS care.

Observational effectiveness research offers complementary insights. By analyzing outcomes in large healthcare databases, researchers can examine treatment effects in populations that trials never enroll. These studies face methodological challenges—confounding, selection bias, measurement error—but they capture something trials cannot: how treatments actually perform when prescribed by ordinary clinicians to ordinary patients. The ideal evidence base combines both approaches, using trials to establish efficacy and observational research to understand how that efficacy translates into effectiveness.

Takeaway

Different research questions require different designs. Whether a treatment can work and whether it works in routine practice are distinct questions—and the answers often differ substantially.

The efficacy-effectiveness gap isn't a scandal or a failure of science. It's an inherent feature of how we generate medical evidence. Controlled trials answer the questions they're designed to answer—whether treatments can work under favorable conditions. Expecting them to predict exactly what happens in routine practice asks too much.

Clinical wisdom lies in understanding this gap and adjusting expectations accordingly. When applying trial evidence to individual patients, consider how closely they resemble trial participants, whether the conditions of the trial can be approximated in practice, and what observational evidence suggests about real-world performance.

The goal isn't to dismiss trial evidence but to interpret it appropriately. Trial efficacy tells you what's possible. Real-world effectiveness tells you what's probable. Good clinical decision-making requires both perspectives.