Research Group on 'Security and Emerging Technologies' #8
The "Research Reports" are compiled by participants in research groups set up at the Japan Institute of International Affairs, and are designed to disseminate, in a timely fashion, the content of presentations made at research group meetings or analyses of current affairs. The "Research Reports" represent their authors' views. In addition to these "Research Reports," individual research groups will publish "Research Bulletins" covering the full range of the group's research themes.
Emerging Technologies and Nuclear Weapons Systems
The implications of emerging technologies have become an important issue in the debate on nuclear postures and deterrence relationships. Although the concrete objectives, concepts, plans and states of development of the nuclear-armed states regarding the introduction of emerging technologies into their nuclear weapons systems are not necessarily clear, a particular focus of discussion has been the potential impact of introducing artificial intelligence (AI), quantum technology and other emerging technologies into intelligence, surveillance and reconnaissance (ISR) and nuclear command, control and communications (NC3).
With regard to ISR for early warning, threat detection, situational awareness and attack/damage assessment, the development of remote sensing technology through quantum sensing, for instance, could improve the ability to detect an adversary's offensive capabilities, and increase the possibility of addressing them before they are used. The use of cloud computing, ultrahigh-speed high-capacity data communications and AI is expected to enable the efficient collection and prompt analysis of vast amounts of information.
In addition, the introduction of advanced AI into the NC3 governing decision-making on the use of nuclear weapons is expected to provide commanders, promptly, appropriately, accurately and efficiently, with options regarding, inter alia, whether to use nuclear weapons, the scale of attack if the decision is made to use them, the selection of targets and the allocation of nuclear weapons for operations. This would apply both in a highly tense situation where nuclear weapons use is envisioned and in a more complex situation where a nuclear war has already broken out. AI support in such challenging situations is also expected to reduce the possibility of the unnecessary or inappropriate use of nuclear weapons due to cognitive bias, human error or false alarms. Furthermore, the practical application of quantum communications may enable reliable communication even in environments as severe as that of a nuclear war.
Emerging technologies will primarily enhance the "nervous system" capabilities of nuclear weapons systems; therefore, they would not be a game changer that could reverse the "nuclear revolution" brought about by the overwhelming explosive power of nuclear weapons. This does not mean, however, that the impact of emerging technologies on deterrence relationships would be insignificant.
Stabilizing the Deterrence Relationship?
For instance, the above-mentioned introduction of emerging technologies into nuclear weapons systems is thought to contribute to strategic stability, in which the probability of a nuclear war breaking out is low, through both crisis stability, in which incentives to conduct a first nuclear strike are checked, and arms race stability, in which incentives to build up strategic nuclear forces quantitatively and/or qualitatively are restrained.
One of the keys to maintaining strategic stability is that a deterrer possess the capability to reliably conduct punishing retaliatory attacks (the capability for deterrence by punishment) even if it is subjected to a first nuclear strike by an adversary (or a deterree). Bolstering ISR capabilities for early detection of an adversary's attack against a deterrer's retaliatory capability would increase the likelihood of responding to such an attack before the deterrer's retaliatory capability is destroyed. In addition, by improving NC3, the deterrer can increase the certainty of conducting a retaliatory attack even when faced with excessive and complex information and time pressure. If the survivability and credibility of retaliatory capabilities are thus strengthened, the adversary could be strongly deterred from launching a first strike against the deterrer. At the same time, the deterrer would also be discouraged from attempting a first strike for fears that its retaliatory capabilities would be neutralized.
Some further argue that the construction of a system for autonomously launching retaliatory capabilities in response to a nuclear attack by an adversary, as long as it operates in a technically and politically appropriate way, could reassure the deterrer of its capability for deterrence by punishment even if the adversary attempts decapitation attacks against the deterrer's leaders with the authority to use nuclear weapons. It is also argued that the use of AI would enable more sophisticated simulations and exercises, through which the deterrer could gain confidence in its own retaliatory capabilities.
If retaliatory capabilities can be made less vulnerable through the introduction of emerging technologies, crisis stability could be maintained and improved, i.e., neither the deterrer nor the deterree would likely have any incentive to conduct a first strike. Moreover, if the invulnerability of retaliatory capabilities is ensured, arms race stability is likely to be maintained since the necessity to acquire nuclear forces beyond the scale needed for retaliatory strikes will be reduced.
Destabilizing the Deterrence Relationship?
Contrarily, it has also been argued that the introduction of emerging technologies into nuclear weapons systems will destabilize the deterrence relationship.
For example, if the deterrer uses enhanced ISR and NC3 (or C3 for conventional forces, or for both nuclear and conventional forces) to equip itself with effective offensive or damage limitation capabilities that can significantly weaken or neutralize an adversary's nuclear forces, the deterrer may be tempted to launch a first strike before the adversary uses its nuclear forces. In such a situation, the adversary might also have a stronger incentive to use its nuclear capability before it is neutralized. In particular, accelerating the decision-making process with advanced AI-based NC3 will in turn accelerate the competition between the deterrer and the deterree for the early use of military force.
In addition, the acquisition of an effective damage limitation capability by the deterrer may force an adversary concerned about the weakening or neutralizing of its own retaliatory capabilities to increase its retaliatory capabilities quantitatively and qualitatively. Arms race stability would be undermined if the deterrer further responded by acquiring more damage limitation capabilities, triggering a spiral of arms acquisition.
Moreover, it is not easy to dissuade a country from acquiring and strengthening such "defensive" capabilities against an adversary's attacks if they are technologically feasible. One of the essential factors that led the United States and the Soviet Union to accept a mutual assured destruction (MAD) relationship during the Cold War and pursue bilateral nuclear arms control for the purpose of strategic stability was the reality that neither could technically acquire sufficiently effective damage limitation capabilities to disable the nuclear forces of the other side.
At this point, it is not clear whether introducing emerging technologies into nuclear weapons systems will lead to the achievement of technologically advanced damage limitation capabilities. Still, emerging technologies introduced into conventional weapons systems are likely to be considered for introduction into nuclear weapons systems as well, and dual-purpose ISR and C3 for both nuclear and conventional forces could be developed. Even if these technologies do not contribute much to enhancing damage limitation capabilities, adversaries are likely to take into account the possibility that their retaliatory capabilities could be seriously undermined when drawing up their own nuclear strategies as well as related operational and procurement plans. It has been pointed out that such a response would result in a decrease in strategic stability.
The Risk of Inadvertent Use of Nuclear Weapons
A more important topic of discussion than the implications for strategic stability is the concern that the introduction of emerging technologies into nuclear weapons systems would increase the risk of the inadvertent use of nuclear weapons as a result of, inter alia, misunderstanding, misidentification, miscalculation or accident.
The first argument is that, while there would be strong incentives in armed conflicts to attack nuclear or dual-use ISR and C3 systems in order to impede the military actions of an adversary, the adversary might interpret an attack on dual-use systems as an attack on its nuclear weapons systems and respond with nuclear retaliation.
Secondly, as decision-making becomes more automated, there is a risk of automation bias, i.e., trusting and choosing options or decisions made by AI under highly stressful circumstances as "optimal" simply because they were made by AI, even if they are incomplete, inaccurate or uncertain. There is also the question of the extent to which the decisions and options presented by AI can be trusted at all. The reliability of AI depends heavily on the quality of its data, the effectiveness of its training and its operational parameters, and it remains to be seen whether these can be brought to a sufficiently reliable level for nuclear weapons, which have not been used in warfare in the more than 75 years since the atomic bombings of August 1945. Nor is it easy to establish criteria for determining whether that level has been reached and to what degree AI can be trusted.
Thirdly, there is the possibility of unexpected accidental use of nuclear weapons due to the complexity and uncertainty of nuclear weapons systems incorporating emerging technologies, as well as errors in algorithms and unforeseen risks. In particular, if unreliable technologies are adopted prematurely in the competition to introduce emerging technologies, or if potential and apparent risk factors are downplayed or underestimated, the risk of an inadvertent use of nuclear weapons could be further increased.
Fourthly, deterrence relationships are defined by the interaction of perceptions between deterrer and deterree, and the incorporation of more advanced AI into NC3 would raise the additional difficulty of assessing perceptions not only between people but also between people and AI, and between AI and AI. As in the case of a multilateral deterrence relationship, AI as an additional element will make the interaction of perceptions regarding deterrence more complicated, and thus increase the possibility of misunderstandings, misperceptions and miscalculations. Furthermore, the gradual transition to and development of AI-enabled NC3 would result in a mixture of old and new systems for both deterrer and deterree, which may further destabilize the interaction of perceptions between humans and machines.
Fifthly, the further digitalization of NC3 and ISR may increase vulnerability to cyberattacks. Providing disinformation, disrupting communications, and interfering with or destroying communication channels would reduce the accuracy of situational awareness. Flawed or malicious code could also be introduced into nuclear weapons systems through the supply chain or by other means, undermining their effectiveness and resulting in the malfunction or unauthorized control of those systems. Moreover, if such cyberattacks can be conducted not only in times of armed conflict but also in peacetime, the possibility of nuclear escalation from a gray zone situation needs to be considered, creating a new risk of nuclear weapons use.
It is difficult to definitively ascertain the implications of emerging technologies for the nuclear deterrence relationship, at least at present. It is clear, however, that there are serious risks such as the degradation of strategic stability and the inadvertent use of nuclear weapons, and that one of the key issues in nuclear deterrence is controlling these risks. It has been argued that various factors such as the scale and sophistication of conventional and nuclear weapons, the speed of technology introduction, geographic and geopolitical tensions, technological symmetry/asymmetry, as well as the status and maturity of strategic relationships would come into play (Boulanin et al.). In order to control and reduce the risks surrounding such complicated issues, it is imperative not only to expect the nuclear-armed states to adopt emerging technologies deliberately, but also to have countries and experts discuss this issue from various perspectives and develop norms and measures for confidence-building, transparency and risk reduction before the risks become apparent.
James M. Acton, "Escalation through Entanglement: How the Vulnerability of Command-and-Control Systems Raises the Risks of an Inadvertent Nuclear War," International Security, Vol. 43, No. 1 (Summer 2018).
Vincent Boulanin, Lora Saalman, Petr Topychkanov, Fei Su and Moa Peldán Carlsson, Artificial Intelligence, Strategic Stability and Nuclear Risk (SIPRI, June 2020).
Mark Fitzpatrick, "Artificial Intelligence and Nuclear Command and Control," Survival, Vol. 61, No. 3 (June-July 2019).
Michael C. Horowitz, Paul Scharre and Alexander Velez-Green, "A Stable Nuclear Future? The Impact of Autonomous Systems and Artificial Intelligence," Cornell University, December 2019.
James Johnson, "The AI-Cyber Nexus: Implications for Military Escalation, Deterrence and Strategic Stability," Journal of Cyber Policy, September 2019.
James Johnson, "Delegating Strategic Decision-Making to Machines: Dr. Strangelove Redux?" Journal of Strategic Studies (2020).
Michael T. Klare, "'Skynet' Revisited: The Dangerous Allure of Nuclear Command Automation," Arms Control Today, Vol. 50, No. 3 (April 2020).
Keir A. Lieber and Daryl G. Press, "The New Era of Counterforce: Technological Change and the Future of Nuclear Deterrence," International Security, Vol. 41, No. 4 (Spring 2017).
Kelley M. Sayler, "Artificial Intelligence and National Security," CRS Report R45178, November 21, 2019.
Yuna Huh Wong et al., Deterrence in the Age of Thinking Machines (RAND, 2020).