The Need for a Federal AI Supervisory Task Force Is More Urgent than Ever

A stylized image representing AI. Photo Credit: Flickr/Mike MacKenzie.

The US government must establish a special task force to test and approve the release of the diverse artificial intelligence (AI) products rolled out by Silicon Valley giants, in order to reduce the national security and physical safety risks posed by new AI technologies.

The potential damage AI could inflict on society exceeds that of any past disruptive innovation. Two characteristics set AI apart: the diversity of its sub-fields and its ability to learn on its own.

  • Because AI spans diverse sub-fields, such as computer vision, natural language processing, and robotics, no single policy can mitigate the risks of all AI products. The threats killer robots pose to national security, for example, differ from those of deepfakes: the former can cause mass casualties, while the latter can fuel disinformation campaigns.
  • The ability of many AI technologies, such as machine learning (ML) systems, to learn and act on their own can blindside the humans who rely on them. Sometimes even AI engineers cannot explain how the machines they have created reach decisions from the data and algorithms they were given.[i] The risks posed by this opacity—what AI experts call the “black box” problem—could be mitigated through constant testing and monitoring to minimize the number of threats unknown to a system’s creators.[ii]

Currently, no supervision mechanism exists at the federal level to measure the performance and trustworthiness of AI products.[iii] As a result, it is up to companies themselves to determine the appropriate method of release. Some companies have successfully rolled out products with guidance from in-house policy experts and private partnerships.

In the case of GPT-2, an ML-powered text generator that can write news stories or works of fiction from a short text prompt,[iv] its creator, the non-profit research company OpenAI, declined to release the full model to the public out of national security concerns.[v] GPT-2 could exacerbate the already serious problem of fake news, and its self-learning ability could produce fabricated information its creators cannot control. After consulting the Partnership on AI, a private technology industry consortium, OpenAI chose to introduce the technology incrementally: a small version in February, then a more powerful version in May, all while continuing to test the model.[vi]

While OpenAI’s gradual release of the GPT-2 model mitigates the risks to some extent, the government must step in and test such technology before its release. The private sector treats profit as the ultimate goal, and corporate decision-makers and shareholders may make choices that favor profit at the expense of public safety. Moreover, private companies, without the assistance of government, may lack the resources to forecast the secondary effects their AI products could have on other sectors, such as national security and healthcare.

In August 2019, the National Institute of Standards and Technology (NIST) released guidance on how the US government should engage in developing standards that place protections on AI technologies, especially those that could significantly harm society if released without proper testing.[vii] While this is a first step toward addressing the issue, the guidance neglects the need for a legally binding mechanism that would enforce ethical and security-conscious AI.

The proposed task force could use a framework similar to the one the US Food and Drug Administration (FDA) employs to clear new drugs. In 1906, Congress passed the Pure Food and Drug Act, which established the FDA. The Act aimed to regulate an industry that had exploited the absence of legislation to put chemical additives in food.[viii] Over time, the FDA developed a supervisory mechanism to test and approve new drug and food products and to protect public health and safety. A similar framework for testing and approving the release of new AI technologies could apply to the AI industry.

Critics of the proposed task force could argue that placing guardrails on new AI technologies would restrict the research and development (R&D) efforts of private companies in the U.S. and therefore put the U.S. at a disadvantage in the AI race with China. Yet while China has drawn intense scrutiny and criticism over its use of AI to monitor citizens, especially the Uighur minority,[ix] it is open to discussions about how it uses the technology.

The Beijing Academy of Artificial Intelligence, backed by the Chinese Ministry of Science and Technology, released the Beijing AI Principles in May 2019, laying out guidelines for AI R&D that consider “people’s privacy, dignity, freedom, independence and rights.”[x] Although one could dismiss this effort as disingenuous given China’s current use of AI, the document shows that Beijing is willing to discuss such issues. The U.S. government could use this opportunity to negotiate a legally binding standard with China and further promote international cooperation on ethical and social norms for AI technologies.


Bibliography

[i] Will Knight, “The Dark Secret at the Heart of AI,” MIT Technology Review, April 11, 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.

[ii] Ariel Bleicher, “Demystifying the Black Box That Is AI,” Scientific American, August 9, 2017, https://www.scientificamerican.com/article/demystifying-the-black-box-that-is-ai/.

[iii] Arthur Allen, “There’s a reason we don’t know much about AI,” Politico, September 16, 2019, https://www.politico.com/agenda/story/2019/09/16/artificial-intelligence-study-data-000956.

[iv] Alex Hern, “New AI fake text generator may be too dangerous to release, say creators,” The Guardian, February 14, 2019, https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction.

[v] OpenAI, “Better Language Models and Their Implications,” OpenAI, February 14, 2019, https://openai.com/blog/better-language-models/.

[vi] Jack Clark, “Written Testimony of Jack Clark Policy Director OpenAI Hearing On ‘The National Security Challenges of Artificial Intelligence, Manipulated Media, and “Deep Fakes”’ Before The House Permanent Select Committee on Intelligence,” U.S. House of Representatives, June 13, 2019, https://docs.house.gov/meetings/IG/IG00/20190613/109620/HHRG-116-IG00-Wstate-ClarkJ-20190613.pdf.

[viii] Ben Panko, “Where Did the FDA Come From, And What Does It Do?” Smithsonian.com, February 8, 2017, https://www.smithsonianmag.com/science-nature/origins-FDA-what-does-it-do-180962054/.

[ix] Paul Mozur, “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority,” The New York Times, April 14, 2019, https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html.

[x] ZX, “Beijing publishes AI ethical standards, calls for int’l cooperation,” XinHuaNet, May 26, 2019, http://www.xinhuanet.com/english/2019-05/26/c_138091724.htm.
