Algorithmic Governance and the Possible Futures of Intelligent Surveillance After COVID-19
By Tereza Kuldova
From the Conference: Imagining The Post-Coronavirus World organized by Auro University, Surat, India
OsloMet – Oslo Metropolitan University
In response to the global COVID-19 pandemic, governments across the globe are – in sudden admiration of the Chinese model – developing new intelligent surveillance technologies in close collaboration with private tech companies. Contact tracing and quarantine surveillance apps have rapidly progressed from voluntary to mandatory, as in Poland, backed by the extended powers of law enforcement, and are already causing considerable stress, anxiety and mental health problems (Bartoszko 2020). Palantir, Peter Thiel’s secretive CIA-backed data-mining company, notorious for its predictive policing software, is now providing the NHS in the UK with COVID-19 data analysis, which, as the UK government remarked, will enable “disparate data to be integrated, cleaned, and harmonised in order to develop the single source of truth that will support decision-making” (emphasis mine).
Big data analytics and ‘hard data’ have been sold as unmediated, objective, and neutral truth for a while now, despite the fact that raw data is an oxymoron (Gitelman 2013): context, bias, and relations of power are erased, with profound consequences, pushing into invisibility the ways in which we are increasingly governed by code that we largely neither understand nor have had a part in making. In the current crisis, new algorithmic modes of combatting fake news and untruth are proliferating, and some governments – e.g. South Africa – are criminalizing untruth, with obvious consequences for freedom of speech and thought. Despite the vast body of academic and activist literature pointing to algorithmic injustices and harms, and to the detrimental and even fatal consequences of data-driven predictions and opaque artificial intelligence systems built on ‘dirty data’, we are told that we should rely on data and base the governance of our societies on it, lending even more political power to big tech (Kuldova 2020).
Algorithmic and technocratic governance, driven by the rapid spread of new intelligent surveillance technologies into all areas of our lives, is already intensifying during this crisis, but will most likely also outlast it. Increased surveillance is now presented by many governments as a trade-off for the gradual reopening of society and the mitigation of economic consequences – in other words, either we submit ourselves and our private health, location and other data to a regime of intrusive surveillance, or we face endless self-isolation and dire economic consequences. Surveillance will be the price to pay for a relative freedom of movement, one that may be revoked at any point by an automated algorithmic decision. The question is: are we not risking sacrificing health and security themselves precisely in the name of health and security – and with them our privacy, fundamental rights and liberties, principles such as due process and the presumption of innocence (Benjamin 2019; O’Neil 2016), and democracy itself as code becomes the law (Susskind 2018)? And what will be the lasting consequences?
Bartoszko, A. 2020. ‘Accelerating Curve of Anxiousness: How a Governmental Quarantine-App Feeds Society with Bugs.’ Journal of Extreme Anthropology 4 (1): E7-E17.
Benjamin, R. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Medford, MA: Polity Press.
Gitelman, L., ed. 2013. “Raw Data” Is an Oxymoron. Cambridge, MA: MIT Press.
Kuldova, T. 2020. ‘Imposter Paranoia in the Age of Intelligent Surveillance: Policing Outlaws, Borders and Undercover Agents.’ Journal of Extreme Anthropology 4 (1): 45-73.
O’Neil, C. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
Susskind, J. 2018. Future Politics: Living Together in a World Transformed by Tech. Oxford: Oxford University Press.
Tereza Kuldova, PhD, is a social anthropologist and Senior Researcher at the Work Research Institute, Oslo Metropolitan University. She is the author of, among other works, How Outlaws Win Friends and Influence People (Palgrave, 2019) and Luxury Indian Fashion: A Social Critique (Bloomsbury, 2016), and the editor of Crime, Harm and Consumerism (Routledge, 2020), Outlaw Motorcycle Clubs and Street Gangs: Scheming Legality, Resisting Criminalization (Palgrave, 2018), Urban Utopias: Excess and Expulsion in Neoliberal South Asia (Palgrave, 2017), and Fashion India: Spectacular Capitalism (Akademika Publishing, 2013). She has written extensively on topics ranging from populism, organized crime, nationalism, philanthrocapitalism, legitimacy, sovereignty, fashion, design, aesthetics, branding, intellectual property rights, philanthropy, and India to outlaw motorcycle clubs, subcultures, and anti-establishment resentment. She is currently working on algorithmic governance, surveillance, and artificial intelligence in policing and the welfare state. She is the founder and editor-in-chief of the Journal of Extreme Anthropology. For more information, please visit: www.tereza-kuldova.com.