Diplomatic Initiative to Simulate and Avoid Retaliatory Miscalculation despite the
Strategic Instability inherent in Multi-lateral Computerization of Command, Control, and Communications
At DISARM:SIMC4 (an all-volunteer Track 2 working group), we attempt to map and understand the risks of escalatory feedback loops in an increasingly digitized military landscape.
Our core function is to identify the new risks that arise when A.I. becomes involved in decision support (such as automated threat detection), especially risks that could result in a devastating "flash war". The term is an analogy with the 2010 stock market "flash crash": a flash war is an accidental war, or the accidental escalation of a conventional war beyond the nuclear threshold, driven by automated systems reacting to one another faster than humans can intervene.
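To make the feedback-loop dynamic concrete, here is a minimal toy simulation in Python. It is purely illustrative: the gain, noise, and damping parameters are assumptions chosen for demonstration, not values drawn from any real early-warning system or from the SIMC4 repository. The structural point it shows is that when two automated systems react to each other with a round-trip gain above 1, a single false sensor reading compounds exponentially, whereas a human review step that attenuates each reaction keeps the effective gain below 1, so perturbations decay.

    import random

    # Toy model of an escalatory feedback loop between two automated
    # threat-assessment systems. All names and parameters here are
    # illustrative assumptions, not values from any real system.

    SENSOR_NOISE = 0.05    # per-step chance a sensor falsely reports a threat
    GAIN = 1.3             # how strongly each side mirrors the other's posture
    HUMAN_DAMPING = 0.6    # attenuation applied when a human review intervenes

    def next_alert(other_alert: float, human_in_loop: bool) -> float:
        """One side's new alert level, driven by the other side's observed posture."""
        observed = other_alert + (1.0 if random.random() < SENSOR_NOISE else 0.0)
        gain = GAIN * HUMAN_DAMPING if human_in_loop else GAIN
        return gain * observed

    def simulate(steps: int, human_in_loop: bool) -> float:
        a = b = 0.1  # both sides begin at a low baseline alert level
        for _ in range(steps):
            a, b = next_alert(b, human_in_loop), next_alert(a, human_in_loop)
        return max(a, b)

    random.seed(0)
    print(f"machine-speed loop after 20 steps: {simulate(20, False):.3f}")
    print(f"human-damped loop after 20 steps:  {simulate(20, True):.3f}")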
Additionally, we promote policy measures such as the Decide Under Attack nuclear posture concept, which would reduce the great powers' vulnerability to being tricked into fighting a nuclear war by a terrorist false-flag operation. Such terrorists would fabricate nuclear missile warnings (via cyberattacks, A.I. data spoofing, deepfakes, and compromised personnel) in an attempt to trick the great powers into destroying each other, an outcome some radicals might perceive as a clean slate permitting the creation of a new state or caliphate.
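A back-of-the-envelope Bayesian calculation shows why acting on warnings alone is so exposed to spoofing. Every number below is an illustrative assumption, not an estimate of any real system; the point is that because genuine attacks are vanishingly rare, even a small false-warning rate means that almost every warning received is false.

    # Bayes' rule: how believable is a missile warning, given that genuine
    # attacks are extremely rare? All numbers are illustrative assumptions.

    P_ATTACK = 1e-6          # assumed prior: chance of a genuine attack on a given day
    P_WARN_IF_ATTACK = 0.99  # assumed sensor sensitivity
    P_FALSE_WARN = 1e-4      # assumed daily false-warning rate (spoofing, glitches)

    def p_attack_given_warning(p_attack: float, p_hit: float, p_false: float) -> float:
        """Posterior probability that a warning reflects a real attack."""
        p_warning = p_hit * p_attack + p_false * (1 - p_attack)
        return p_hit * p_attack / p_warning

    posterior = p_attack_given_warning(P_ATTACK, P_WARN_IF_ATTACK, P_FALSE_WARN)
    print(f"P(real attack | warning) = {posterior:.4f}")  # about 0.0098 here

Under these assumed numbers, roughly 99% of warnings would be false alarms. A launch-on-warning posture forces a retaliation decision on exactly this kind of evidence; a Decide Under Attack posture waits for an attack to be confirmed, so a fabricated warning by itself cannot trigger retaliation.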
Finally, we seek to serve as a forum where military A.I. researchers from rival powers can hold open dialogues and propose stabilizing technical measures (such as methods of hardening A.I. against spoofing) that would lower both the risk of accidents and the vulnerability to false-flag terrorism.
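As one sketch of what such a measure might look like, the hypothetical Python fragment below (the modalities, names, and thresholds are all assumptions, not a real C3 architecture) requires corroboration from multiple independent sensor modalities before an automated system may flag a threat, so that spoofing any single data channel is insufficient on its own.

    from dataclasses import dataclass

    # Sketch of a spoofing-resistance measure: require agreement across
    # independent sensor modalities before flagging a threat. Hypothetical
    # names and thresholds throughout.

    @dataclass
    class SensorReport:
        modality: str      # e.g. "radar", "infrared", "seismic"
        detected: bool
        confidence: float  # 0.0 to 1.0

    def corroborated_threat(reports: list[SensorReport],
                            min_modalities: int = 2,
                            min_confidence: float = 0.9) -> bool:
        """Flag a threat only if enough independent modalities agree.

        An attacker who compromises a single channel (say, a cyberattack
        on radar feeds) cannot cross the threshold alone.
        """
        agreeing = {r.modality for r in reports
                    if r.detected and r.confidence >= min_confidence}
        return len(agreeing) >= min_modalities

    # A single spoofed high-confidence radar track is not enough:
    reports = [SensorReport("radar", True, 0.99),
               SensorReport("infrared", False, 0.95),
               SensorReport("seismic", False, 0.90)]
    print(corroborated_threat(reports))  # False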
— US Trying to Get Other Countries to Agree to Eschew AI for Nuclear Command and Control (Dept of State, 2023)
— Prove It Before You Use It: Nuclear Retaliation Under Uncertainty (War on the Rocks, 2023)
— AI versus AI, and Human Extinction as Collateral Damage (TomDispatch, 2023)
— Nuclear Doctrine Effectiveness Calculator (GitHub, 2022 - present)
— SIMC4 - Retaliatory Miscalculation Wargaming Environment (GitHub, 2020 - present)
— Strategic Risks Associated with AI-Enabled Weapons (National Security Commission on Artificial Intelligence, 2021)
— ‘Skynet’ Revisited: The Dangerous Allure of Nuclear Command Automation (Arms Control Today, 2020)
— Keep Human Control Over New Weapons (Arms Control Today, 2019)
— Ethical Control of Autonomous Systems (Naval Postgraduate School, 1993 - present)
— Nuclear Brinkmanship in AI-Enabled Warfare: a Dangerous Algorithmic Game of Chicken (War on the Rocks, 2023)
— Exploring the Impact of Automation Bias and Complacency on Individual Criminal Responsibility for War Crimes (Journal of International Criminal Justice, 2023)
— Never Give Artificial Intelligence the Nuclear Codes (The Atlantic, 2023)
— Military Artificial Intelligence as a Contributor to Global Catastrophic Risk (SSRN, 2022)
— Mending the “Broken Arrow”: Confidence-Building Measures at the AI-Nuclear Nexus (War on the Rocks, 2022)
— ‘Catalytic Nuclear War’ in the Age of Artificial Intelligence & Autonomy: Emerging Military Technology and Escalation Risk between Nuclear-Armed States (Journal of Strategic Studies, 2021)
— Inadvertent Escalation in the Age of Intelligence Machines: A New Model for Nuclear Risk in the Digital Age (European Journal of International Security, 2021)
— Deterrence with Imperfect Attribution (MIT Press, 2020)
— When Speed Kills: Autonomous Weapon Systems, Deterrence, and Stability (SSRN, 2019)
— Normal Accidents: Living with High-Risk Technologies (Perrow, 1984)
If you are interested in collaborating with DISARM:SIMC4, contact working group coordinator Jon Cefalu at jon@preambleforgood.org.
The backgrounds of our currently contributing volunteers include: