Organizations develop assumptions about the future in order to make decisions. Increasingly, such assumptions are not developed “from the gut” but delegated to algorithmic prediction methods. The central promise of algorithmic prediction is to connect diverse data sources and identify meaningful patterns in the interconnected data from which decisions can be derived. Algorithmic prediction is surrounded by an aura of abstraction and neutrality, which is why it is used especially when decisions with far-reaching social consequences must be made. However, the history of how these methods emerged often casts doubt on the notion of abstract and neutral decision support. Instead, it becomes clear that algorithms have often been developed for very specific problems and subsequently “recycled” for quite different kinds of problems. A look at the past of algorithmic prediction thus brings to light its networked history.
To understand how algorithmic prediction is changing the shape of decision making in organizations, it is important to develop a sensitivity to its networked genesis. During the COVID-19 pandemic, it became clear that predictive procedures have been exchanged for decades, particularly between the health and police sectors. Each time these procedures are transferred from one sector to another, there is also the possibility that implicit assumptions and decision-making premises will migrate along with them. In what follows, I describe three moments of this transfer. Further details on each of these moments can be found in an open-access study on this topic (co-authored with Simon Egbert and Elena Esposito).
From epidemiology to policing
The transfer of predictive techniques from epidemiology to policing dates back to the early 20th century. As early as the 1920s, criminological research concluded that patterns of criminal activity in large cities bore distinct similarities to the patterns by which viral infections spread. From this grew the belief that crime was “contagious” and, consequently, could be predicted and “treated” preventively using procedures similar to those of public health.
This early linkage between epidemiology and policing has more recently been translated into algorithmic prediction techniques (“predictive policing”). A particularly striking example is the Strategic Subject List, introduced by the Chicago Police Department in 2013. At the core of the procedure lies the assumption that violent crime spreads like a virus and that future crimes can therefore be predicted from data on past crimes. The algorithmic process calculates a risk score for people who have already been recorded by the police. If a person’s risk score exceeds a threshold, he or she is informed of this by letter or home visit. The aim of this procedure is to prevent people from committing further acts of violence and from “infecting” people in their immediate environment with their own potential for violence.
From counterterrorism to public health
The example of the Strategic Subject List shows how predictive techniques from epidemiology have been transferred to the field of policing. In the course of the COVID-19 pandemic, another transfer in the opposite direction became apparent. This time, complex software systems and algorithmic procedures originally developed for counterterrorism and intelligence use were transferred to the public health arena. One of the most important providers of such predictive software, and the driving force behind this networking, is the US company Palantir. Palantir’s software allows users to link highly heterogeneous and unstructured data sources in order to discover new patterns in the data. In the course of the pandemic, Palantir offered its software services not only to actors from the security sector, but also, for the first time, to government actors from the health sector. In March 2020, for example, the UK government announced a collaboration with Palantir to better integrate and analyze pandemic-related data from the publicly funded healthcare system (NHS). It is neither surprising nor objectionable that in situations of urgent health threats, governments seek to centralize and connect information streams to make better and faster decisions (or at least to signal the ability to do so). Still, the UK government’s decision not to develop its own procedures but to rely on a vendor like Palantir raises questions. Does the transfer of technological infrastructure from counterterrorism to public health also infect the latter with the assumptions, values, and procedures of the former?
From self-reliance to surveillance
The question of whether governments should partner with companies like Palantir is a matter of public debate. Ultimately, however, governments – especially in crisis situations – tend to weather such public controversy largely unscathed. By contrast, concerns about government surveillance and control have a much more immediate impact on pandemic response measures that rely on the cooperation and personal responsibility of citizens. Because manually tracing contacts between infected individuals and others is extremely costly, many countries developed apps to make tracing faster, cheaper, and more accurate. In most countries, however, these apps have proven to be a flop – from the perspective of government pandemic management – because too few citizens downloaded them or used them consistently.
One of the reasons for this reluctance seems to be concern about too close a link between medical and police use of the data collected. In the German context in particular, data protection issues were discussed extensively and controversially during the development of the Corona warning app. Unlike in the case of the British cooperation with Palantir, the controversy surrounding the Corona warning app led to concrete adjustments in the application’s development process. This example shows that not only the actual but also the potential interconnection of algorithmic prediction procedures can have an impact on their shape and use.
Past and future of prediction
The past of algorithmic prediction reveals a troubled history between public health and public safety. What can be learned from this past for prediction’s future? By looking at the interconnectedness of prediction, we can develop a better understanding of the “hidden” decision-making premises of these procedures. This comparative approach seems particularly important in situations where algorithmic prediction influences decisions with far-reaching social consequences, but where a direct view into the development and application of these systems is either impossible (Strategic Subject List and Palantir) or limited (Corona warning app).
Dr. Maximilian Heimstädt is an Akademischer Oberrat at Bielefeld University, where he conducts research in the EU-funded project “The Future of Prediction: The Social Consequences of Algorithmic Forecast” (ERC Advanced Research Project No.
833749). He is also head of the research group “Reorganizing Knowledge Practices” at the Weizenbaum Institute for the Networked Society in Berlin. From 2016 to 2020, he was a research associate at the Reinhard Mohn Institute for Management (RMI) at Witten/Herdecke University. More info here: heimstaedt.org
We very much want to keep Maximilian Heimstädt’s innovative, cross-disciplinary thinking – until recently at UW/H – active in the WITTEN LAB network beyond this contribution, and to stay in touch with him at his new places of work, the Weizenbaum Institute and Bielefeld University.