Does the Future of Artificial Intelligence Favor Authoritarianism?


In our networked world, artificial intelligence (AI) methodologies can be copied and pasted with minimal modification until they become geopolitical forces. On April 8, 2021, the US Office of the Director of National Intelligence (ODNI) released its Global Trends 2040 report, placing AI more centrally than ever: not an isolated technology but an increasingly indivisible feature of the global system, with the potential to disrupt every aspect of civilization by processing and analyzing data faster and more imaginatively than the human mind. The report summarizes emerging trends in AI, such as the convergence of seemingly unrelated technologies, accelerating development timelines, and compounding first-adopter advantages, particularly in standards setting.[i] Yet the report also highlights the dark undertones of these seemingly neutral trends, such as the growing inequality and raw power imbalance between the states that create technology and those that merely use and amplify it. The ODNI is concerned that the diffusion of digital technologies and their rapid ‘copy-paste’ proliferation will give more authoritarian states access to this new domain, along with the capacity to shape the development of the field, and the norms and structures surrounding it, in their image.[ii] This raises the question: are these AI technologies truly neutral forces?

Political powers play a role in determining the direction of accelerating and disruptive technologies, as technology and technological capacity are increasingly recognized as geopolitical power.[iii] This is particularly concerning because authoritarian regimes that employ AI possess distinct advantages over non-authoritarian governments. Firstly, not only can they coerce or self-select individuals into government service to build technologically savvy talent pools, but they can also use AI to centralize and amass data without regard for political restraints or individual liberties.[iv] Authoritarian systems can therefore deploy AI at the national level with little popular consent or intra-bureaucratic wrangling. Secondly, authoritarian systems can rapidly advance existing technologies by centralizing financial resources and statecraft behind state-backed enterprises, as their scientists and technicians integrate capabilities designed to disrupt competitors. Thirdly, because AI is typically deployed first and evaluated later, consequences and standards setting can become secondary to development: “corporations and states at the forefront of emerging technology will shrink timelines and choose to develop technologies before the implications of these choices are fully known.”[v] Authoritarian economies closely managed by the state may be better prepared to react quickly and fund new and risky technologies, while open economies rely heavily on private-sector funding for innovation and are vulnerable to planned economies able to buy, steal, or develop any successful AI methodology. Lastly, the basic building blocks of operating systems require logs that record every process within a computer, inherently generating a near-perfect surveillance opportunity for authoritarians wishing to crush dissent.[vi]

Even assuming that authoritarian regimes produce fewer innovators in AI, quantity and a lack of constraints can sometimes overcome quality: what advantage is openness if it enables the instant theft and deployment of new AI models by authoritarian states? Open societies will struggle to out-innovate these coercive tactics and data-reservoir advantages, as adversarial learning allows authoritarian systems to leverage increasingly automated, state-backed cyber operations that identify, collect, and iterate on technologies, gaining access to systems, harvesting intellectual property, and securing data in ways that advance state power.[vii] Given these power imbalances, could AI development favor authoritarianism?

US policymakers must question the intent and consequences of AI technologies with a critical eye in order to protect US national security. In an AI-enabled future, labor and societal forces will be disrupted by transformational change that moves faster than human minds and traditions can tolerate, and democratic governance must work to navigate these accelerating currents. The ODNI report notes that “although many countries will develop strict rules on the use of personal data, there will be debate on whether these rules can coexist with the full realization of AI capabilities.” As is visible today, when personal data protection is limited, states and corporations work to bypass controls through a push-pull of data collection, sophisticated spyware, and enhanced decryption techniques that enlarge their data reservoirs.[viii] US policymakers must look inward and develop strategic objectives before applying AI, to ensure that its use cases serve collective goals. Leaving AI development to authoritarian competitors is an unacceptable abdication. Instead, agile prototyping, funding, and scaling of national security innovation by organizations like the Defense Innovation Unit (DIU) is desperately needed.[ix] AI will be vital to securing future national security interests and handling the complexity of future systems, but AI for national security must remain lean: able to pivot and innovate at every stage of development, self-evaluate, and evolve in real time while deployed.

Moreover, as AI technologies (statistical, rules-based systems unlike the natural world they model) advance and become more complex, they become relatively more dynamic but less intelligible to humans; fewer individuals understand end-to-end the data and data pipelines these systems rely on to operate.[x] Mathematical complexity can thus hide AI’s bias and inaccuracy from policymakers and citizens, limiting accountability and leaving only large, hierarchical organizations able to foster cutting-edge development. Because AI that works in a corporate setting may create unacceptable risks and inaccuracies in the context of national security, the U.S. must educate and attract AI expertise to public service.[xi]

Overall, policies must not be shaped by the needs of AI training sets; AI exists as a human tool, and a loss of human-centrism risks further “exacerbating inequalities within and between states.”[xii] Democratic states should instead apply AI’s potential against the weaknesses of authoritarian systems: exposing corruption, breaking through propaganda narratives, and delivering better outcomes for their citizens. It is critical that AI development become agile and adaptable to the needs of individual teams and missions within the government, rather than shaping, and being shaped by, organizations that serve whatever best suits the algorithm.

Bibliography

[i] “Global Trends 2040: A More Contested World,” Office of the Director of National Intelligence, April 8, 2021, https://www.dni.gov/files/ODNI/documents/assessments/GlobalTrends_2040.pdf.

[ii] Maya Wang, “China’s Techno-Authoritarianism Has Gone Global,” Foreign Affairs, April 8, 2021, https://www.foreignaffairs.com/articles/china/2021-04-08/chinas-techno-authoritarianism-has-gone-global.

[iii] Ibid., 64.

[iv] Theophano Mitsa, “How Do You Know You Have Enough Training Data?” Towards Data Science, April 22, 2019. https://towardsdatascience.com/how-do-you-know-you-have-enough-training-data-ad9b1fd679ee.

[v] “Global Trends 2040: A More Contested World,” Office of the Director of National Intelligence, April 8, 2021, https://www.dni.gov/files/ODNI/documents/assessments/GlobalTrends_2040.pdf, pg. 57.

[vi] “Highly Evasive Attacker Leverages SolarWinds Supply Chain to Compromise Multiple Global Victims With SUNBURST Backdoor,” FireEye, December 13, 2020, https://www.fireeye.com/blog/threat-research/2020/12/evasive-attacker-leverages-solarwinds-supply-chain-compromises-with-sunburst-backdoor.html.

[vii] Chaohui Yu et al., “Transfer Learning with Dynamic Adversarial Adaptation Network,” International Conference on Data Mining, May 2019, http://jd92.wang/assets/files/a16_icdm19.pdf.

[viii] “Terms of Service; Didn’t Read,” March 2021, https://tosdr.org/en/api.

[ix] “Artificial Intelligence,” Defense Innovation Unit, https://www.diu.mil/solutions/portfolio#ArtificialIntelligence.

[x] “Artificial Intelligence System Tops One Billion Neurons on a Desktop Computer,” PRNewswire, August 6, 2020, https://www.prnewswire.com/news-releases/artificial-intelligence-system-tops-one-billion-neurons-on-a-desktop-computer-301106973.html.

[xi] Rep. Robin Kelly and Rep. Will Hurd, “Artificial Intelligence and National Security,” Center for Security and Emerging Technology, July 30, 2020, https://cset.georgetown.edu/research/artificial-intelligence-and-national-security/.

[xii] “Global Trends 2040: A More Contested World,” Office of the Director of National Intelligence, April 8, 2021, https://www.dni.gov/files/ODNI/documents/assessments/GlobalTrends_2040.pdf.
