Artificial Intelligence and Arms Control
Submission Statement
Though this paper focuses on arms control through the lens of AI-enabled measures, I found it a useful primer on the dynamics of arms control more generally. While I don't believe AI as a whole fares well against the six criteria that make a weapon amenable to regulation, I can see a path for certain AI applications to be regulated via treaty. For example, mandates requiring a man-in-the-loop or man-on-the-loop would seem to minimally disrupt weapon effectiveness while greatly limiting the disruptive nature, or "horribleness," of autonomous killers.
Paul Scharre is the Executive Vice President and Director of Studies at CNAS. He is the award-winning author of Four Battlegrounds: Power in the Age of Artificial Intelligence. Megan Lamberth is a former Associate Fellow for the Technology and National Security Program at CNAS. Her research focuses on U.S. strategy for emerging technologies and the key components of technology competitiveness, such as human capital, R&D investments, and norms building.
Watts identifies six criteria that he argues affect a weapon's tolerance or resistance to regulation: effectiveness, novelty, deployment, medical compatibility, disruptiveness, and notoriety.11
- Effectiveness: A weapon that provides "unprecedented access" to enemy targets and has the capacity to ensure dominance is historically resistant to regulation.
- Novelty: The historical record on regulating novel weapons or military systems is mixed. Countries have pursued regulation of certain new weapons or weapons delivery systems (e.g., aerial bombardment) while resisting regulation of other novel military systems (e.g., submarines).
- Deployment: Weapons that are widely deployed ("integrated into States' military operations") tend to be resistant to arms control.
- Medical compatibility: Weapons that cause "wounds compatible with existing medical protocols" in military and field hospitals are historically difficult to ban or regulate.
- Disruptiveness: Powerful nations have historically tried to regulate or ban weapons that are "socially and militarily disruptive" out of fear that such weapons could upend existing global or domestic power dynamics.
- Notoriety: Campaigns by civil society groups or widespread public disapproval can increase a weapon's notoriety, making it potentially more susceptible to arms control.12
Whether arms control succeeds or fails depends on both its desirability and its feasibility. The desirability of arms control turns on states' calculation of a weapon's perceived military value versus its perceived horribleness (whether because it is inhumane, indiscriminate, or disruptive to the social or political order). Thus, the desirability of arms control is a function of states' desire to retain a weapon for their own purposes balanced against their desire to restrain its use by their adversaries.
AI technology poses challenges for arms control for several reasons. AI is diffuse, and many of its applications are dual-use. Because the technology is still emerging, its full potential has yet to be realized, which may hinder efforts to control it. Verifying any AI arms control agreement would also be challenging: states would likely need reliable ways of confirming that other states are in compliance before they would be comfortable restraining their own capabilities. These hurdles, though significant, are not insurmountable in every instance. Under certain conditions, arms control may be feasible for some military AI applications. Even as states compete in military AI, they should seek opportunities to reduce its risks, including through arms control measures where feasible.