
[Research Reports] The US Strategy for Emerging Technologies and the Issue of "Consensus" Building

07-08-2021
Kousuke Saitou (Associate Professor, Sophia University)

The Research Group on 'Security and Emerging Technologies' #6

"Research Reports" are compiled by participants in research groups set up at the Japan Institute of International Affairs, and are designed to disseminate, in a timely fashion, the content of presentations made at research group meetings or analyses of current affairs. The "Research Reports" represent their authors' views. In addition to these "Research Reports," individual research groups will publish "Research Bulletins" covering the full range of the group's research themes.

Introduction

In October 2020, the United States government issued the "National Strategy for Critical and Emerging Technologies" (C&ET strategy).1 It affirmed the importance of promoting new science, technology, and innovation initiatives in order for the US to maintain its economic and security competitiveness. The strategy consists of two pillars: (1) fostering a "National Security Innovation Base" involving the private sector, and (2) promoting technology protection, including the prevention of illicit technology theft and investment controls.

Although the C&ET strategy mostly reaffirmed the Trump administration's policy of promoting innovation in the security field, it is also notable for prioritizing certain technology areas. It set out three categories of technology management: the US will pursue global leadership in its most important technological areas, work with allies and friends to develop and protect technologies in areas of relatively high priority, and focus on risk management in other areas.

One of the key points in technology management is how domestic and international actors can agree on such prioritization. The issues that emerge are not uniform; they depend on the nature of the technology, the level of social implementation, and the interests of the private sector. From this perspective, this article takes 5G and artificial intelligence (AI) as examples and considers the diversity of issues arising in the management of emerging technologies.

Issue of Building an International Consensus Regarding 5G Risk Management Policies

Although it is debatable whether 5G falls within the "emerging technology" category, it is a model case for understanding international risk management in today's technological security environment, especially with regard to supply chain risk.

The fundamental concern with 5G technology is that companies such as Huawei and ZTE have invested heavily in developing 5G technology and acquiring related companies, becoming leaders in patent acquisition and standardization. It is natural for companies to make strategic investments in technology and to make acquisitions for business reasons, and restraining such activities is not appropriate from the standpoint of economic liberalism. From a security perspective, however, the major risk management problem is that such companies, operating under China's "military-civil fusion" policy and backed by enormous subsidies and legal support, can act in line with the will of the Chinese government in ways that threaten the security of other countries.

Such issues in information and communications are not limited to the US; they must also be addressed in cooperation with a wide range of allies and friends. From a military point of view, the introduction of 5G is expected to enable high-speed communications among allied countries, but there are concerns that it could also create risks of espionage and cyberattacks.

The US has sought to exclude or counter Chinese threats in 5G technologies through restrictions on government procurement and through investment screening by the Committee on Foreign Investment in the United States, while urging its allies and friends not to use Huawei's 5G products. Their responses, however, have not been uniform.

On the one hand, some countries have acted in line with US intentions, with Australia and New Zealand banning Huawei and ZTE from entering the 5G business. On the other hand, European countries were initially reluctant and uneven in their responses compared with the US, although they are now moving toward stricter regulation. Furthermore, as China promotes the spread of its 5G systems and other telecommunications infrastructure through the "Digital Silk Road" initiative, many developing countries have introduced inexpensive Huawei 5G products to promote economic and social development despite the security risks.2

These 5G cases reaffirmed the importance of strengthening technology management amid the US-China conflict. Although cooperative action is needed in adopting and regulating such leading-edge technologies, the reality is that the level of regulation differs among allies and friends owing to differences in the urgency of the threat, economic relations, and social conditions in each country.

Normative Conflict in the US Domestic AI Ecosystem

Although the AI field faces many international issues, such as the global expansion of R&D structures and the semiconductor supply chain, it is also notable that domestic coordination is required on the norms and ethics governing future uses.

The AI strategy issued by the Department of Defense in 2018 set out its approach to AI use across the entire Department, from the battlefield to routine operations, including "operations, training, sustainment, force protection, recruiting, healthcare, and many others."3 The strategy also stated that the Department would promote the development of AI-related human resources and strengthen cooperation with private companies, academia, and allies and friends, while working to ensure ethical standards and safety on the principle that AI should be kept under human control.

The use of AI, and especially the problem of its autonomy, has long been a focus of defense policy. Well before the AI strategy was announced, a Department of Defense directive issued in 2012 had already emphasized human judgment and intervention in the autonomy of weapons systems, and this emphasis was reflected in operational guidelines for unmanned systems.4 The 2018 AI strategy also cited this directive as a reference point for its ethical standards.

Behind the need for such guidelines is the growing importance of consensus building on usage policies and ethical standards, as the development and use of AI in the security field involves a variety of stakeholders, including the private sector. In 2020, the Department of Defense announced five ethical principles for AI, setting out the following policies: (1) responsibility for the development, deployment, and use of AI; (2) equitability, to minimize unintended bias in AI capabilities; (3) traceability in the processes of AI development and operation; (4) reliability, safety, security, and effectiveness; and (5) governability through human intervention to detect and avoid unintended consequences.5

An important point is that these principles were announced based on recommendations of the Defense Innovation Board, the result of "15 months of consultation with leading AI experts in commercial industry, government, academia, and the American public, which resulted in a rigorous process of feedback and analysis among the nation's leading AI experts with multiple venues for public input and comment." As mentioned earlier, US AI development for defense has been based on a system that includes a variety of private-sector actors; the public-private partnership through the Defense Innovation Unit is a notable case. At the same time, as diverse actors participate in the AI ecosystem, not only various ideas but also criticisms arise, as Google's case shows. In response to objections within the company, Google decided to end its participation in Project Maven and to withdraw from bidding on the Joint Enterprise Defense Infrastructure project, later announcing that it would not pursue "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."6

Conclusion

Since the Obama administration, the US has accelerated its open innovation policy for defense purposes to involve the private sector, leading it to take into account a variety of interests and norms in managing its ecosystem at both the domestic and international levels. In other words, the management of emerging technologies can succeed only by coordinating these diverse values and building a consensus.

However, the examples of 5G and AI indicate that political challenges appear in a variety of ways depending on the inherent characteristics of each technology domain and its underlying ecosystem. This means it is necessary not only to establish a general policy for the development and regulation of emerging technologies, but also to adopt customized approaches for specific domains.

In any case, the classification shown in the C&ET strategy reflects US interests, which do not necessarily correspond to those of its allies and friends, each of which has its own political, economic, and technological conditions tied to the ecosystem built in that country. From this perspective, it will be necessary to consider how to translate a "policy in general" on the management of emerging technologies into "planning in detail" that combines the unique characteristics of each technology field with the circumstances of each country.




1 The White House, National Strategy for Critical and Emerging Technologies, October 2020.
2 Kei Koga, "Japan-Southeast Asia Relations: The Emerging Indo-Pacific Era," Comparative Connections, vol. 21, no. 1, May 2019, pp. 125-134, http://cc.pacforum.org/wp-content/uploads/2019/05/1901.pdf.
3 Department of Defense, Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity, 2018, p. 5.
4 Department of Defense Directive 3000.09, "Autonomy in Weapon Systems," November 21, 2012; Department of Defense, Unmanned Systems Integrated Roadmap FY 2013-2038, pp. 15, 81, 66-73.
5 Department of Defense, "DOD Adopts Ethical Principles for Artificial Intelligence," February 24, 2020, https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/.
6 "AI at Google: Our Principles," Google Official Blog, June 7, 2018, https://www.blog.google/technology/ai/ai-principles/. Google participated in the cloud solution project of the Defense Innovation Unit later in May 2020.