In 1940, France fielded one of the largest armies in the world, protected by the most expensive fortification system ever built, and commanded by generals who had won the previous war. They lasted six weeks. This was not an anomaly. It was a pattern.
Military leaders consistently misjudge the character of the next conflict they will face. Not because they are stupid—most are exceptionally capable—but because the systems that select, promote, and organize them are optimized for problems that have already been solved. The machinery of military institutions, from career incentives to procurement cycles, creates systematic distortions in how leaders perceive emerging threats.
The question is not why individual generals fail to predict the future. Nobody can do that reliably. The real question is why military organizations, with vast intelligence resources and professional planning staffs, collectively orient themselves toward the wrong war. The answer lies not in individual cognition but in the structural logic of military institutions themselves.
Career Selection Effects
Military promotion systems are remarkably good at identifying competence—but competence defined by the institution's existing understanding of war. Officers who excel in the current paradigm rise. Officers who challenge it tend to stall. This is not conspiracy; it is the natural behavior of any large organization that must evaluate thousands of people against standardized criteria.
Consider what a successful military career actually selects for. An officer must demonstrate mastery of current doctrine, perform well in exercises designed around existing assumptions, and earn the approval of superiors who themselves rose by mastering the previous paradigm. Each promotion gate reinforces alignment with institutional orthodoxy. By the time an officer reaches general rank—typically after 25 to 30 years of service—they have been filtered through a system that rewards consistency with the prevailing model of warfare.
This creates a dangerous paradox. The officers best positioned to lead in the next war are those who have most thoroughly internalized the assumptions of the last one. The mavericks who questioned doctrine, who argued for unconventional capabilities, who made their superiors uncomfortable—they were weeded out at colonel, or chose to leave. What remains at the top is not a random sample of military thinking. It is a curated selection of officers whose instincts align with established institutional wisdom.
The few exceptions prove the rule. Officers like Billy Mitchell, who championed airpower between the wars, or John Boyd, who revolutionized fighter tactics and strategic theory, are remembered precisely because they fought their own institutions—and paid professional costs for doing so. Mitchell was court-martialed. Boyd retired as a colonel. The system does not reward those who see differently; it promotes those who see the same way, only more sharply.
Takeaway: Promotion systems don't just select leaders—they select a worldview. By the time an officer reaches the top, they have been shaped by decades of reinforcement toward the institution's existing assumptions about how wars are fought.
Organizational Blinders
Military organizations do not assess threats in a vacuum. They assess threats through the lens of their own capabilities, budgets, and institutional survival. This is not cynicism—it is organizational behavior so predictable that it operates almost like a physical law. Every service, every branch, every major command interprets the strategic environment in ways that validate its own existence and justify its share of resources.
During the interwar period, the U.S. Navy built its strategy around decisive fleet engagements in the Pacific—because that was what battleship admirals knew how to do and what naval shipyards knew how to build. The Army Air Corps pushed strategic bombing as the decisive instrument of future war—because independent airpower justified an independent service. Neither was entirely wrong, but both framed the future to fit their institutional needs rather than adjusting institutions to fit the likely future.
This organizational logic distorts threat assessment at every level. Intelligence estimates are shaped by what the institution wants to hear. Capability development follows the path of least bureaucratic resistance. War games are designed—often unconsciously—to produce outcomes that validate existing programs. A 2002 U.S. war game called Millennium Challenge famously had to be restarted after the opposing force commander, playing unconventionally, sank most of the Blue fleet on the first day. The exercise was restructured to ensure the preferred outcome.
The deeper problem is that these biases are largely invisible to those inside the system. When every colleague shares the same assumptions, when every briefing reinforces the same threat picture, when every promotion rewards alignment with the same institutional priorities, dissenting views do not just seem wrong—they seem unreasonable. Organizational culture does not merely shape what leaders think. It shapes what they are capable of thinking.
Takeaway: Institutions don't just fight wars—they define what war looks like, and they define it in ways that justify their own structure. The most dangerous bias in strategic planning is the one nobody in the room can see because everyone shares it.
Uncertainty Management
Military planning must deal with radical uncertainty—the future character of war is genuinely unknowable in detail. But military culture has a deeply uncomfortable relationship with uncertainty. Commanders are trained to project confidence. Staff processes demand specificity. Budgets require precise justification. The entire institutional apparatus pushes leaders to convert ambiguity into false precision.
This manifests in predictable ways. Planning scenarios become single-point predictions rather than ranges of possibility. Capabilities are optimized for the most likely threat rather than hedged across multiple futures. Doctrine crystallizes into dogma. The French army's commitment to the "methodical battle" in the 1930s was not irrational given its experience in 1918—but the doctrine was presented and internalized with a certainty that left almost no institutional capacity to adapt when reality diverged from the plan.
The U.S. military's experience after 2001 illustrates the same dynamic from a different angle. Having spent a decade preparing for conventional high-intensity conflict, the institution found itself in protracted counterinsurgency campaigns it had deliberately chosen not to prepare for. This was not ignorance—plenty of analysts had warned about irregular warfare. But the institution's planning processes demanded focus, and focus meant choosing one future over others. The future it chose was the one that aligned with its preferred capabilities.
Genuine strategic adaptation requires something that military culture finds almost intolerable: admitting that you might be wrong about what's coming, and building forces flexible enough to handle surprises. This means accepting redundancy, maintaining capabilities that may never be used, and resisting the institutional pressure to optimize for a single scenario. It means, in essence, planning for your own ignorance—which is precisely what confident, decisive, promotion-worthy officers are least inclined to do.
Takeaway: The greatest risk in strategic planning is not choosing the wrong answer—it is being too certain about any single answer. Institutions that mistake confidence for accuracy will always be surprised, because war's defining characteristic is that it surprises.
The pattern of military misjudgment is not a failure of individual intelligence. It is a systemic outcome of how military institutions select leaders, process information, and manage uncertainty. The same organizational qualities that make armies effective in known environments—discipline, hierarchy, doctrinal coherence—become liabilities when the environment shifts.
Understanding this does not make the problem disappear. No amount of structural awareness can eliminate the fundamental uncertainty of future conflict. But recognizing that military institutions have built-in blind spots is the first step toward designing organizations that can detect and correct for them.
The generals who get the next war right will not be the most confident ones. They will be the ones who built institutions capable of being wrong gracefully—and adapting before the cost becomes catastrophic.