REPORT: Technology and the Future of National Security

SFS Dean Joel Hellman, Professor Toni Gidwani (SFS’04, SSP’08) with Palmer Luckey, Founder of Anduril and Oculus VR, and Trae Stephens (SFS’06), Chairman of Anduril. Photo credit: Dakota Cary


By: Dakota Cary, Reporter

“Do you have an extra billionaire lying around?” asked one student at the Technology and the Future of National Security event hosted by the Center for Security Studies on Monday, January 28. The question, however trite it might seem, was not an unreasonable one to direct at the technology panel moderated by Professor Toni Gidwani (SFS’04, SSP’08) and featuring Palmer Luckey, Founder of Anduril and Oculus VR, and Trae Stephens (SFS’06), Chairman of Anduril.

The need to secure capital in the entrepreneurial “wild west” of emerging defense technologies was by far the most prevalent theme of the event. Indeed, it occasionally felt more like a panel for entrepreneurs than for national security professionals. Your correspondent nevertheless gleaned a few underlying points relevant to national security:

  1. As technologies develop, the U.S. government must continuously focus on the ethical decisions underlying smart weapons and AI technology.
  2. The existing investment model of the U.S. government does not yield deployable technologies.
  3. Technology acquisition by the U.S. government has shifted primarily to adapting consumer-based technologies for national security needs.

The panelists emphasized the changes that have taken place over the last 40 years. Whereas the government previously developed technologies and then licensed them for commercial use (think the internet, GPS, etc.), the process is now reversed. Multinational corporations like Google are at the forefront of AI research because of their access to troves of consumer data. The government, lacking this advantage, lags far behind. Luckey and Stephens view this as a market opportunity. By focusing on “no new physics” and leaving capital-intensive research to universities and defense primes (read: taxpayers), the panelists concentrate their business on merging existing technologies to create new products. Their goal? Make products only five months ahead of existing technology, keep production in-house, and listen to the government’s needs.

The panelists argued that the Department of Defense (DoD) currently invests by writing many small ($50,000) program checks to technology start-ups, of which only 10% will bear fruit. Stephens specifically noted that graduating from a DoD pilot program to a project with recurring funding is extremely rare; this often dissuades private investors from backing a company and leads to wasted capital. When asked whether other countries’ models could be incorporated into DoD’s investment strategy, the panelists were quick to dismiss direct comparisons, given those countries’ varying degrees of respect for intellectual property. They did, however, acknowledge that a more concentrated investment strategy (either larger checks or recurring revenue streams) might yield better results by raising the ratio of deployable technologies per investment. Moreover, they felt this approach could increase competition among market participants by helping small companies overcome high capital barriers.

The panelists also tied the discussion of DoD investment strategy directly to the ethics of national security technologies and their development. The bottom line: if we don’t develop the technologies now, and if we don’t have the conversation about AI ethics, human-in-the-loop responsibility, and machine decision-making, then we will have to figure it out on the fly; that is how mistakes get made. As one panelist noted, our near-peer adversaries are unlikely to be concerned with assigning responsibility to a single individual for the actions of an AI-based weapons system that makes an error (however egregious) during combat; the U.S., by contrast, would not accept such a risk-tolerant legal framework.

The future of technology and security seems to hinge on the application of consumer technologies, the haphazard approach to acquiring deployable technologies, the rapid advancement of AI, and the ethical underpinnings of states engaged in conflict. As for SSP students interested in creating the future technologies of national security, take the advice of our colleague: find a billionaire.
