Predicting Terrorism: Implications for Big Data in Public Safety

By Ben Schaefer, Columnist

In the recent past, the use of predictive technology in public safety seemed like a distant concept, best kept in the realm of science fiction. The 2002 film Minority Report—hailed even today as the most prominent pop-culture representation of predictive policing—placed such technology far in the future. It turns out the future was closer than Minority Report’s creators imagined; similar technologies have been used for police activities in the United States since at least 2009, offering beneficial contributions while presenting newfound challenges to the law enforcement and intelligence communities.[i]

Predictive technologies discern patterns in large datasets to forecast potential future events. In the realm of counterterrorism, predictive technologies could, in theory, help prevent future attacks by analyzing patterns from previous attacks, helping counterterrorism officials identify things like potential hot spots for terrorist activity. These kinds of predictions are not new to counterterrorism. Intelligence agencies have been analyzing so-called “big data” for decades, bringing together disparate pieces of information for a comprehensive assessment of current threats.[ii] However, modern technology allows analysts to aggregate unprecedented amounts of data and discern trends that would otherwise be too time-consuming for human analysts to scrutinize.

In the domain of counterterrorism, predictive technologies take many forms. Twitter, one of the social media platforms most utilized by terrorist recruiters, has developed an algorithm to determine which of its users are likely to join a terrorist organization.[iii] But the use of these predictive technologies does not begin and end with Internet media platforms; financial institutions have also been using artificial intelligence to better track terrorist activities by discerning abnormalities outside the range of normal bank transactions.[iv]
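The transaction-monitoring idea can be illustrated with a minimal sketch. The example below is a hypothetical simplification, not any institution's actual system: it flags a transaction as anomalous when it falls several standard deviations outside an account's prior history.

```python
import statistics

def is_anomalous(history, amount, threshold=3.0):
    """Return True if `amount` lies more than `threshold` standard
    deviations from the mean of the account's prior transactions."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(amount - mean) > threshold * stdev

# Hypothetical account history, in dollars
history = [45, 60, 52, 48, 55, 61, 50]
print(is_anomalous(history, 9500))  # large outlier -> True
print(is_anomalous(history, 58))    # routine purchase -> False
```

Production systems weigh many more features (counterparties, timing, geography), but the principle is the same: model what "normal" looks like, then flag departures from it.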

Predictive technologies have also helped law enforcement better prepare for incidents of terrorism. Most prominent among these technologies is Dfuze, a program designed by ISS Global to help identify where terrorist attacks are most likely to occur.[v] The technology aggregates data from past incidents, helping law enforcement better predict attacks and augment security at potential “hot spots.”[vi] While this technology is not foolproof, it saves law enforcement hours of wading through countless pages of data and allows for better allocation of resources.
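At its simplest, that kind of aggregation can be sketched in a few lines. The fragment below is an illustrative stand-in, not Dfuze itself: it merely ranks locations by how often past incidents occurred there.

```python
from collections import Counter

def rank_hotspots(incidents, top=3):
    """Rank locations by the number of past incidents recorded there,
    a crude frequency-based stand-in for hot-spot forecasting."""
    counts = Counter(location for location, _year in incidents)
    return [location for location, _count in counts.most_common(top)]

# Hypothetical (location, year) incident records
past = [("station A", 2012), ("station A", 2014), ("plaza B", 2013),
        ("station A", 2015), ("plaza B", 2016), ("airport C", 2011)]
print(rank_hotspots(past))  # ['station A', 'plaza B', 'airport C']
```

Real systems layer on far richer variables, but the output serves the same purpose: a ranked list telling agencies where to position resources first.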

This efficiency comes at a cost, as predictive technologies have undeniable flaws. In crime prevention, they rely heavily on data available through criminal records and community-driven citizen reporting, which means that any bias in data reporting will be replicated by these technologies.[vii] In turn, the reports from these programs may ultimately suggest that law enforcement agencies invest more resources into communities with historically higher crime rates—communities that may have been shaped by historical divides or social policies gone awry, and in which increased surveillance may victimize innocent citizens. Predictive technologies for counterterrorism may do the same, though they tend to analyze a wider range of historic factors than predictive policing does when predicting potential attacks.[viii]
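The feedback loop described above can be made concrete with a toy simulation (the numbers are hypothetical, not drawn from the cited study): two districts have identical true crime rates, but one starts with more patrols, so it generates more recorded crime, which in turn keeps its patrol share elevated.

```python
def simulate_feedback(true_rates, patrols, rounds=5):
    """Toy model: recorded crime is the true rate scaled by patrol
    presence, and each round's patrol allocation follows the records."""
    for _ in range(rounds):
        recorded = [rate * p for rate, p in zip(true_rates, patrols)]
        total = sum(recorded)
        # Reallocate patrols in proportion to recorded crime
        patrols = [len(true_rates) * r / total for r in recorded]
    return patrols

# Two districts with equal true rates but an initially biased allocation
print(simulate_feedback([1.0, 1.0], [1.2, 0.8]))
# The initial imbalance never washes out: roughly [1.2, 0.8]
```

Even in this stripped-down model, the disparity in the starting data persists indefinitely, which is the core of the bias-replication concern.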

In theory, more efficient allocation of law enforcement resources should help reduce crime and, in the same vein, reduce terrorist attacks when applied to counterterrorism efforts. In practice, however, knowing where crime is likely to occur does not necessarily reduce violence. A 2016 study of the Chicago Police Department’s predictive policing program, while not specifically focused on terrorism, found that data aggregated from criminal records correlated strongly with where crime was likely to occur, yet the program failed to even marginally reduce crime in the city.[ix] The fault lay with the predictive technology’s inability to discern the victims of attacks: although police knew where to better position their resources, they could not prevent a spontaneous crime against unanticipated victims.[x]

This raises the question of whether predictive technology is an effective method for preventing terrorist attacks. While security agencies can treat a prospective terrorist attack much like ordinary criminal activity—as the United Kingdom did when it used Dfuze to supplement security at the London Olympics—terrorist activity typically requires longer planning, greater financial capital, and clandestine organizational affiliations, among other factors.[xi] Planning a large-scale attack like that of 9/11 requires more sophistication than the average homicide, meaning terrorist groups could more effectively hide their activities from the algorithms that would seek to track them.

Although this question has not been definitively answered in the United States, authorities in Israel have implemented predictive programs against would-be terrorists in the West Bank, arresting those deemed likely to commit terrorist acts and, in many cases, placing them on administrative detention status.[xii] The program may seem successful, as terrorist incidents decreased sharply between 2015 and 2017, but finding the metrics to accurately link arrests to a decrease in terrorism requires further assessment.[xiii]

Moreover, the efficacy of predictive technology relies on accurate, comprehensive input data to create algorithms that facilitate future operations.[xiv] Complete datasets are rare, particularly in terrorism studies, and even when such data is available, researchers inputting it may not capture all of the variables required for a fully accurate prognosis. These errors can reduce the effectiveness of counterterrorism efforts or can be used to harass innocent people implicated by the algorithms.

Predictive technology promises a more effective means of keeping the public safe, yet the mismatch between the ideal tools touted by science fiction and the realities of modern technology poses new and unforeseen challenges. As societies venture further into the reality of using big data and predictive software, the agencies employing these technologies must maintain an open dialogue with the communities they serve to ensure the very technology intended to keep the public safe does not put civilians in harm’s way.







[i] Walter L. Perry, Brian McInnis, Carter C. Price, Susan C. Smith, and John S. Hollywood, “Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations,” RAND (2013), 5.

[ii] Michael Landon-Murray, “Big Data and Intelligence: Applications, Human Capital, and Education,” Journal of Strategic Security 9, No. 2 (2016): 97.

[iii] “Twitter Data Mining Reveals the Origins of Support for Islamic State,” MIT Technology Review, March 23, 2015.

[iv] Rebecca Sadwick, “Your Money Helps Fight Crime: Using AI to Fight Terrorism, Trafficking and Money Laundering,” Forbes, January 9, 2018.

[v] Gary Peters, “Counterterrorism: Trying to Predict the Future,” Army Technology, September 16, 2015.

[vi] Ibid.

[vii] Kristian Lum and William Isaac, “To Predict and Serve?” Significance Magazine (October 2016), 15-16.

[viii] Peters, “Counterterrorism.”

[ix] Jessica Saunders, “Pitfalls of Predictive Policing,” RAND, October 11, 2016.

[x] Ibid.

[xi] Peters, “Counterterrorism.”

[xii] Staffan Dahlloff, Orr Hirschauge, Hagar Shezaf, Jennifer Baker, and Nikolaj Nielsen, “EU States Copy Israel’s ‘Predictive Policing,’” October 6, 2017.

[xiii] Ibid.

[xiv] Peters, “Counterterrorism.”
