The arms control architecture that stabilized the latter half of the twentieth century was engineered for a specific kind of weapon: large, expensive, observable, and produced by a handful of state actors. Treaties like SALT, START, and the INF agreement worked because their objects of control were countable, their signatures detectable, and their development cycles measured in decades.
That world is dissolving. Autonomous weapons systems blur the line between conventional munition and software update. Cyber capabilities exist as code that can be copied infinitely without observable signature. Hypersonic glide vehicles compress decision-making windows to minutes. Artificial intelligence enters every layer of the kill chain, from surveillance to targeting to command. None of these technologies fits neatly into the verification regimes inherited from the Cold War.
The institutional question is not whether existing frameworks need adaptation, but whether they can be adapted at all, or whether the era of comprehensive bilateral and multilateral arms control has structurally ended. This article examines three challenges confronting governance designers: the verification gap created by intangible and dual-use technologies, the temporal mismatch between innovation cycles and diplomatic processes, and the historical pathways through which weapons norms have emerged. The argument is not pessimistic, but it is sober: new institutional forms will be required, and they will not look like the treaty regimes we have known.
The Verification Technology Gap
Verification has always been the load-bearing wall of arms control. Without confidence that adversaries are abiding by their commitments, no agreement survives domestic political scrutiny. The genius of Cold War-era treaties lay in matching the verifiability of weapons to the architecture of trust: missile silos could be counted from satellites, warheads inspected on-site, and test detonations detected by seismic networks.
Emerging technologies systematically violate these enabling conditions. An autonomous weapons system may be physically indistinguishable from its remotely piloted predecessor; the difference resides in software that can be uploaded, removed, or altered in hours. Cyber weapons leave no inventory to count. AI-enabled targeting capabilities exist as model weights and training data that can be replicated infinitely and concealed within civilian infrastructure.
The dual-use problem compounds this opacity. The same semiconductor fabrication, machine learning research, and commercial drone technology that powers civilian economies also enables military applications. Verification regimes cannot simply count facilities or restrict exports without crippling legitimate scientific and economic activity. The Wassenaar Arrangement's struggles with cyber export controls illustrate how traditional control mechanisms become either porous or overbroad when applied to digital goods.
Institutional designers face a choice between two imperfect paths. The first involves shifting verification from artifacts to behaviors, monitoring patterns of use, deployment, and effects rather than capabilities themselves. The second involves embedding verification deep within the technology stack itself, through cryptographic attestation, hardware-rooted trust, and algorithmic audit mechanisms.
Both paths require unprecedented technical cooperation among actors with strong incentives to defect. They also require expanding the definition of legitimate inspection beyond state militaries to include private firms, research consortia, and civil society auditors. The verification regimes of the future, if they emerge, will be hybrid public-private architectures unlike anything the IAEA or OPCW has attempted.
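The second path, embedding verification in the technology stack, can be made concrete with a toy sketch. The idea behind cryptographic attestation is that a deployed system proves its running software matches a declared, inspected build. Real schemes rely on hardware roots of trust (such as a TPM) and asymmetric signatures; the sketch below substitutes a keyed hash with a hypothetical shared key purely for illustration.

```python
import hashlib
import hmac

# Hypothetical shared key; real attestation uses hardware-protected
# asymmetric keys, not a symmetric secret like this.
ATTESTATION_KEY = b"shared-key-held-by-inspector-and-device"

def measure(software_image: bytes) -> str:
    """Hash the deployed software (model weights, control code, etc.)."""
    return hashlib.sha256(software_image).hexdigest()

def attest(software_image: bytes) -> str:
    """Device side: produce a keyed attestation over the measurement."""
    digest = measure(software_image)
    return hmac.new(ATTESTATION_KEY, digest.encode(), hashlib.sha256).hexdigest()

def verify(claimed_image: bytes, attestation: str) -> bool:
    """Inspector side: check the attestation against the declared build."""
    expected = hmac.new(
        ATTESTATION_KEY, measure(claimed_image).encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, attestation)

# An inspected, declared build passes; a silent software update does not.
declared = b"autonomy-stack v1.2 (inspected)"
receipt = attest(declared)
assert verify(declared, receipt)
assert not verify(b"autonomy-stack v1.3 (covert)", receipt)
```

The point of the sketch is the asymmetry it creates: the inspector need not re-examine the hardware, only check that the measurement has not drifted from the declared baseline, which is precisely what makes software-borne capability changes auditable in principle.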
Takeaway: When weapons become software, verification must shift from counting things to observing behaviors. The institutions that succeed will be those that can audit code, capability, and conduct simultaneously.
The Speed of Innovation Problem
Treaty negotiation operates on diplomatic time. The Nuclear Non-Proliferation Treaty took roughly five years from the start of formal negotiations to entry into force in 1970. The Chemical Weapons Convention required nearly two decades of negotiation. These timelines were tolerable when the underlying weapons technologies themselves matured over generations.
Contemporary technological cycles operate on radically different clocks. Foundation model capabilities now double in roughly six to ten months by some measures. Hypersonic test programs progress from concept to deployment within a single diplomatic negotiation cycle. By the time a treaty addressing a particular generation of capability is drafted, debated, signed, and ratified, the technology it governs may be three generations obsolete or have migrated into entirely new domains.
This temporal mismatch creates what we might call governance lag, a structural condition in which institutional outputs are perpetually behind the problems they were designed to address. The standard diplomatic response, accelerating negotiations, runs into hard constraints. Sovereignty concerns, domestic ratification processes, and the inherent need for consensus in multilateral fora cannot be compressed indefinitely without sacrificing legitimacy.
Institutional adaptation may require decoupling the negotiation of principles from the negotiation of specifics. Framework conventions establishing broad obligations could be paired with rapidly amendable technical annexes, drafted by expert bodies and adopted through streamlined procedures. The Montreal Protocol's adjustment mechanism offers a partial template, allowing signatories to update specific phase-out schedules without reopening the underlying treaty.
Equally important is moving from treaty-based regulation toward what scholars of regime complexity call experimentalist governance: provisional rules, continuous monitoring, structured learning across jurisdictions, and iterative revision. This represents a profound cultural shift for arms control communities accustomed to the finality of signed agreements, but it may be the only institutional form capable of keeping pace with the technologies it seeks to govern.
Takeaway: Static treaties cannot govern dynamic technologies. The future of arms control lies in framework agreements with living technical annexes, designed to be revised faster than the weapons they regulate.
Norm Development Pathways
Not all weapons restrictions originate in treaties. Some of the most durable constraints in international security (the taboo against chemical weapons use, the tradition of nuclear non-use, the stigma surrounding anti-personnel landmines) emerged through normative processes that ran parallel to and sometimes ahead of formal legal codification.
The historical record reveals a recurring pattern. Norm entrepreneurs, often coalitions of states, civil society organizations, scientific communities, and affected populations, identify a particular weapon as categorically illegitimate. They construct narratives emphasizing humanitarian harm, strategic instability, or violation of fundamental principles. These narratives gradually reshape what counts as acceptable state behavior, sometimes generating treaties as a downstream codification of norms already widely internalized.
The Ottawa Process on landmines and the Convention on Cluster Munitions both illustrate this pathway. Coalitions bypassed traditional consensus-based forums, generated normative pressure through smaller groups of committed states, and created instruments that have shaped behavior even among non-signatories through reputational costs and market effects.
Applying this model to emerging technologies presents both opportunities and complications. The Campaign to Stop Killer Robots has built significant normative momentum around lethal autonomous weapons, drawing on humanitarian framings developed in earlier campaigns. Yet emerging technologies differ from their predecessors in ways that complicate norm construction. Their effects are often invisible or attributable only with difficulty. Their dual-use character means that prohibition may foreclose substantial civilian benefits. Their development is distributed across thousands of private actors rather than concentrated in state arsenals.
Successful norm entrepreneurship in this domain will likely require new coalitional architectures: technologists alongside diplomats, private firms alongside states, regional bodies alongside global institutions. The norms that emerge may be narrower and more conditional than the categorical prohibitions of earlier generations, focusing on specific applications, contexts, and effects rather than entire weapons categories.
Takeaway: Norms often precede treaties and outlast them. Building shared expectations about acceptable use may matter more than formal prohibition, especially when the weapons in question cannot easily be banned outright.
The institutional architecture of arms control was built for a world of countable weapons, slow innovation, and bilateral superpower rivalry. None of those conditions persists. Verification regimes designed for missiles and warheads strain against software and algorithms. Negotiation cycles measured in years cannot pace technologies measured in months.
Yet pessimism about the death of arms control mistakes the obsolescence of particular institutional forms for the obsolescence of the underlying project. The need to constrain destructive capabilities through cooperative arrangements has not diminished; it has intensified. What must change is the institutional repertoire we bring to that task.
The arms control of the coming decades will be hybrid in actor composition, experimentalist in design, and normative as much as legal. It will require diplomats fluent in code, technologists fluent in governance, and institutions capable of learning at the speed of the technologies they seek to govern. Building these capacities is the central challenge for global security architects in our time.