Algorithmic Governance is an interdisciplinary research network dedicated to understanding the social implications of artificial intelligence, big data, platform capitalism, datafied knowledge, and automated decision-making.
‘Algorithmic governance has many faces: it is seen as ordering, regulation and behaviour modification, as a form of management, of optimisation and of participation. Depending on the research area it is characterised by inscrutability, the inscription of values and interests, by efficiency and effectiveness, by power asymmetry, by social inclusiveness, new exclusions, competition, responsiveness, participation, co-creation and overload. For most observers, governance becomes more powerful, intrusive and pervasive with algorithmization and datafication’ (Katzenbach & Ulbricht 2019: 11).
Grounded in interdisciplinary perspectives, we aim to develop a theoretical framework that will enable us to understand the emerging relations of power, governance, organization, management and ordering of social life in light of algorithmization and datafication. Technology is not neutral; it is political.
Intelligent surveillance, predictive policing, border control and biometric technologies are all increasingly powered by big data analytics and artificial intelligence. This raises not only questions of bias, ethics, and the right to privacy, but also questions of justice, new forms of algorithmic harm, ‘technological redlining’, and the reinforcement of inequality and prejudice. We aim to explore these issues in depth and offer alternative readings grounded in ultra-realist and zemiological perspectives.
Work & Labour Rights
Algorithms increasingly decide over our working lives. From platform capitalism to workplace surveillance in the Amazon warehouse, we are witnessing a rise of automated decision-making that is reshaping not only organizations, HR and management, and ways of working, but also relations of trust, possibilities for workers’ participation and co-determination, and ultimately workers’ rights. We aim to reveal the often invisible ways in which algorithmic governance is reshaping our working lives.
Democracy & Justice
Black-box algorithms, opaque decision-making systems, predictive big data analytics, and ICT and smart surveillance systems are increasingly used in areas ranging from criminal justice and welfare governance to the moderation of hate speech and disinformation. But their inherent flaws and biases may result in a democratic deficit; infringements on human rights, privacy and freedom of speech; and new forms of harm and injustice. How does algorithmic governance challenge democratic oversight, and what solutions can be proposed?
Datafied Knowledge & Beyond
Data are marketed as neutral, as pure – as the ‘truth’. The results of data-driven predictions and opaque algorithmic decisions are labelled ‘evidence’ and ‘intelligence’ – something ‘solid’ and hard in ‘liquid times’, something to reliably build our lives, organizations and governance on. Yet we know that raw and pure data is an oxymoron. Data are always already dirty and political, reflecting the interests of those who build these systems; purity is an illusion, and so is neutrality. We explore the societal consequences of this blind faith in datafied knowledge. But we also ask: what defies datafication, and can opacity – in a world of transparency and control – have a subversive potential?