
Brief: RegTech, Compliance and the Pitfalls of the AI Act by Tereza Østbø Kuldova

Updated: Nov 17, 2023

Speech delivered at LO Sweden at the conference "Led by Machines", organized by FEPS; for more, see: https://feps-europe.eu/event/led-by-machines/


You can read the speech below or you can watch the video on YouTube. Please refer to this blog post if reusing the text.


RegTech, Compliance and the Pitfalls of the AI Act

Tereza Østbø Kuldova, Oslo Metropolitan University, Norway


Dear all,


It is a great pleasure to be invited to offer opening remarks for this important panel, "Regulating platforms and AI: what does the EU do to stand up to Big Tech?" While our panelists will delve into several concrete issues pertaining to the proposed regulation, I will make some general remarks on what I see as the fundamental challenges in standing up to Big Tech and protecting fundamental rights at work: privacy, collective rights, equality, and human dignity.


I want us to take a step back and reflect on the ways in which we are governed today and the ways in which we seek to prevent and pre-empt harms and to protect rights: from the level of EU directives, via national legislation or collective agreements, to corporate-style governance in both the private and the public sector. Looking closer at the nature of this governance and regulation, we realize that it is driven by a logic of techno-bureaucratic, formalistic, and legalistic compliance, organized around the imperatives of risk-based management, auditing, impact assessments, due diligence, standards, the streamlining of organizational and technical processes, transparency, disclosures, and so on.


This governance logic is itself now being turned into a software product by so-called RegTech, or regulatory technology, companies. RegTech firms promise to automate companies' compliance with complex regulations, be it the GDPR, anti-money laundering, countering the financing of terrorism, supply chain due diligence, the Whistleblowing Directive, or ESG reporting. And they will offer precisely such products for compliance with the AI Act in the future. These RegTech products promise to prevent everything from fraud, insider trading, corruption, and security breaches to sexual harassment, bullying, and microaggressions, through ever more detailed surveillance and monitoring of workers, suppliers, and clients alike; much of this technology falls within the AI Act's high-risk category. Regulatory compliance has thus become a massive market for auditing and consulting companies such as the Big Four audit firms, as much as for Big Tech and venture-capital-funded startups. The same tools used for regulatory compliance are routinely leveraged to surveil workers, assess their performance, exert control, and sanction.


In the current geopolitical context, with the war in Ukraine and the threats posed by authoritarian regimes such as China, heightened concerns about national security are driving the security industry ever closer to RegTech, as security features become integrated into regulatory compliance solutions; this security is again imagined to be achieved through pre-emptive surveillance. It will therefore be even more difficult in the future to argue against security and for workers' rights, as the security interests of nation-states come into conflict with workers' rights and protections.


In this context of the commodification and capture of regulatory compliance and standard-setting by an alliance of tech and audit companies, the widely discussed propositions around the need for algorithmic auditing would translate into the auditing of compliance software and its algorithms. Effectively, risk- and audit-based regulations such as the AI Act will open new markets for tech and compliance businesses. Products by so-called "independent third parties" will emerge on the market and provide solutions for algorithmic auditing in compliance with the AI Act. Ironically enough, RegTech products that themselves serve compliance and audit will be audited by both new and old actors within this market. Indeed, one begins to wonder how workers' rights and dignity can be preserved vis-à-vis this alliance of Big Tech and Big Audit.


We can trace this form of meta-governance and the ballooning of compliance-driven solutions to the emergence of internal control regulation from the mid-1970s onwards. The introduction of this principle meant, to put it simply, that control bodies such as labour inspectorates and other supervisory bodies would no longer inspect the realities on the ground but would instead largely control the control systems in place in organizations. When realities on the ground are no longer the primary focus of inspections, trust is placed instead in the formal design of compliance and control systems and in key intermediaries: auditors, lawyers, compliance officers, data experts, and other specialists. How organizations translate laws and regulations into practice is to a large degree left to their discretion, such as the choice of third-party multipurpose software for both compliance and worker monitoring.


This convergence of "surveillance capitalism" and "regulatory capitalism" poses a fundamental challenge. In many ways, it has contributed to the monopolization of the power of definition by employers and corporate interests. This epistemic dimension is often neglected in the struggle to protect workers, but it is fundamental in a world where we not only trust but even worship data, while looking down upon qualitative forms of knowledge. The employer has disproportionate power and discretion to translate legal and regulatory texts into practice. We therefore see practices of so-called "creative compliance", where the letter of the law is followed but its spirit is undermined and perverted. Laws intended to protect workers can even be weaponized against them. The algorithmic architectures of control built in the name of compliance further reinforce this power of the employer.


Trade unions often find themselves in a battle against this epistemic power of employers; they produce their own interpretations, reports, numbers, and expert knowledge to counterbalance the discursive power of professionals acting in the interest of capital. This is often difficult. What we are left with is a battle of intermediaries and experts. In my view, this technocratic and depoliticized battle not only glosses over fundamental power imbalances but also disavows and distracts from the realities and harms on the ground.


Those of us trying to protect the basic dignity and rights of workers are, too, implicated in the reproduction of this form of governance. In our desire to control the forces of capital and their exploitative drives, and in our desire to protect the worker, we often call for more of the same type of regulation while expecting different results. We wish, of course, to hold the powerful accountable and we want transparency, and so we call for risk assessments, audits, impact assessments, investigations, and a "human in the loop" in algorithmic decision-making; we call for more data and more disclosures.


But I think we are reaching a point where we need to ask – can more of the same regulatory logic, now commodified, really deliver justice and dignity for workers? Who will define and translate this compliance into practice? Who will be this human in the loop? Will it be the local HR department, or a trade union, or a data engineer? Can we expect workers to really understand the exhausting complexity of regulations and algorithms? Who, among us, can truly assess the validity of an algorithmic audit or human rights impact assessment? Who defines what is ethical, or an acceptable risk? Are audits even a real solution to the inequalities, injustices, and harms that we seek to tackle? Can AI be reasonably regulated as a mere product on the market when it is in fact reshaping the ways in which we are managed, governed, and policed – by corporations, employers, and states?


When we privilege data, audit, and transparency as solutions over knowledge and experience, we further reinforce the informational and data asymmetry between employers and trade union representatives. Often, the employer will claim to have holistic access to organizational data, a single source of truth about the organization and every worker, and hence a better view; this data will be presented as objective because it is data-driven and total. Of course, such data is far from neutral or total: it leaves out anything that cannot be, or simply is not, measured by the employer, pushing us into the well-known McNamara fallacy. This cultural privileging of data as a superior source of information has serious consequences. The qualitative, embodied, collective, and organizational knowledge of trade union representatives and workers is presented as partial, incomplete, and emotive, as a few stories from "disgruntled employees", and hence as unrepresentative and irrelevant as a basis for decision-making. Harms posed by technologies and algorithmic management are easily dismissed or treated as a personnel matter, a case of problematic and deviant individuals, and thus as outside the scope of collective agreements. Algorithmic management, law, and HR all treat problems in an individualized, legalistic, and psychologizing fashion, which is now reinforced by digital infrastructures that produce granular data on individuals. Hence, despite the formal existence of codetermination, the collective dimension is often dismissed or strategically ignored by employers.


How can we then ensure that collective rights, protections, and justice are a reality and not a mere data-driven compliance fiction?


In my view, this will require us to ask hard questions about the limits and pitfalls of risk-based regulation. It will also require us to return to fundamental questions of power and politics. The contemporary RegTech landscape is populated by products that protect the employer from liability while expanding workplace surveillance. But there is, as far as I can see, no market for software that protects workers and ensures their rights vis-à-vis the employer. Instead, we see function creep and a perversion of compliance into managerial tools for managing, surveilling, and sanctioning workers. There is no software to alert the employer to a breach of a collective agreement or of co-determination rules, but there is a multitude of algorithmic solutions to alert the employer when a worker exceeds the time limit for a toilet break or uses "negative" wording in an email. The AI Act promises that such high-risk tech solutions will come under closer scrutiny, but who will be the auditors, and in whose interest will they act?


My deep worry is that this layering of auditing and self-disclosure glosses over and disavows the harms on the ground, allowing them to continue, if not proliferate. The human experience, the human costs, and the human as such become buried ever deeper in a regulatory and technological maze of control architectures that also makes it ever more difficult to voice critique and dissent. How can we, within this governance landscape, fight for the rights and protection of workers, for real compliance with labour laws and collective agreements, and for real and meaningful co-determination? How can we ensure not only decent work but human flourishing for the many?


This is my question to all of you.


Thank you for your attention.


In the video, the speech begins at 08:16.



