Fairness and AI governance – responsibility and reality


Linnet Taylor

Linnet Taylor is Assistant Professor of Data Ethics, Law and Policy at the Tilburg Institute for Law, Technology, and Society (TILT). She was previously a Marie Curie research fellow in the University of Amsterdam’s International Development faculty, with the Governance and Inclusive Development group. Her research focuses on the use of new types of digital data in research and policymaking around issues of development, urban planning and mobility. She was a postdoctoral researcher at the Oxford Internet Institute, and completed a DPhil in International Development at the Institute of Development Studies, University of Sussex. Her work focuses on data justice – the development of a framework for the ethical and beneficial governance of data technologies across different regions and perspectives.

This essay is about a central and growing problem in AI governance: the gap between what people are experiencing from AI technologies, and regulatory and policy discussions about those technologies. I gave a talk at the GOAL project recently (an EU research project on governance of, and by, algorithms) about how the claims of political movements such as Black Lives Matter, MeToo, and organising by gig economy and big tech workers represent core concerns for AI regulation but are not recognised as such.

This problem has been growing for a decade while researchers have been wrestling with the question of what fairness, accountability and transparency mean in relation to AI, and how they can be operationalised by those building and regulating technology. It bloomed in its full complexity over the last couple of years, as the Fairness, Accountability and Transparency conference series was adopted by the powerful ACM computer science association, and to some extent legitimised an interdisciplinary approach to fairness, rather than treating it as a statistical problem within computer science. As this was happening, a group of scientists working on this as a computational problem split off to form the (literal) FORC conference – Foundations of Responsible Computing – declaring that the social scientific and legal/philosophical approaches to the problem were not productive and the focus should be on defining and formalising requirements for computing systems and models.

This split highlights the contested, often dysfunctional nature of the discussion of what constitutes fairness and responsibility with regard to computing and AI. It can be encapsulated by stating two fundamentally incompatible interpretations of fairness:

First, a solvable problem of distributing resources to achieve a particular state of (individual or group) parity (see, e.g., Liu et al. 2005); a minimal code sketch of this interpretation follows the quotation below.

Alternatively, an intersubjective problem of justice and equality with constantly changing goalposts:

US Supreme Court Justice Ruth Bader Ginsburg said that she was often asked how many women on the Supreme Court would be ‘enough’.

Her answer? ‘When there are nine.’

‘For most of the country’s history, there were nine and they were all men. Nobody thought that was strange,’ she explained.
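To make the first interpretation concrete, here is a minimal sketch of fairness treated as a formal parity constraint, using the standard demographic parity metric. It is not drawn from any specific paper or system; the loan-approval framing, the group data and the 0.1 threshold are all hypothetical, chosen only to show what a formalised requirement of this kind looks like.

```python
# Minimal sketch: fairness formalised as a demographic parity constraint.
# All data and thresholds below are hypothetical, for illustration only.

from typing import Sequence


def positive_rate(decisions: Sequence[int]) -> float:
    """Share of individuals who received the positive decision (1)."""
    return sum(decisions) / len(decisions)


def demographic_parity_gap(group_a: Sequence[int], group_b: Sequence[int]) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))


# Hypothetical loan-approval outcomes (1 = approved, 0 = denied).
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [0, 1, 0, 0, 1, 0, 0, 0]

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.2f}")

# A formalised requirement might demand that the gap stay below a threshold
# (0.1 here). This gives auditors a measurable target, but says nothing about
# whether the system should have been deployed in the first place.
THRESHOLD = 0.1
print("Parity requirement satisfied" if gap <= THRESHOLD else "Parity requirement violated")
```

The point of the sketch is that, once fairness is defined this way, it becomes a checkable property of a system; the second interpretation, illustrated by the quotation above, resists exactly that kind of closure.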

This poses a problem. Fairness must be formalised because doing so provides a target for the creation of standards and requirements so that algorithmic systems can (hopefully) be regulated. It’s also worth doing this because searching for conceptual clarity makes the discussion more inclusive of technical disciplines and applied legal perspectives, which are germane to preventing problems with AI. However, fairness must not be formalised because fairness does not tell us anything about whether a particular application should be deployed in the first place. In practice, many of us working in this field are constantly seeing how formalisation of standards/requirements such as fairness can be used strategically to distract society and legislators from more fundamental issues.

As Os Keyes and collaborators have shown in a brilliant satirical take on the problems of AI and fairness, algorithmic fairness and responsibility discussions, which have until now been based on the idea that essential concepts could be isolated and formalised into requirements for developers and engineers, are inadequate to the task of reflecting the concerns of different publics with regard to AI. Instead, what we get is a race to the middle: a set of watered-down aims and requirements which do not address the problem that some systems are fundamentally undesirable.

A striking example of this is the EU’s own process for defining the foundations of AI regulation: the High-Level Expert Group on AI, tasked with establishing the principles that should shape governance and law of the EU’s AI work. Famously, this process came up with ‘red lines’ – tasks that AI should not be applied to – including identifying and tracking people, covert (hidden) AI systems, AI-enabled citizen scoring in violation of fundamental rights and lethal autonomous weapons systems. Happily some of these are making a comeback in the current (leaked) draft regulation on AI, but at the time, HLEG group member Thomas Metzinger issued a searing critique of the co-option of the process by corporate interests who effectively vetoed the idea of red lines in the final draft.

Reframing AI governance to include contestation

All this leads up to a simple observation: the AI governance process we see in the world is surrounded by political and often visceral contestation, centring on the claims of groups whose interests have been ignored and marginalised for much longer than AI has been around. AI threatens to sharpen these forms of marginalisation precisely at a time when it is becoming possible to contest them, and this is creating a storm of protest around its governance. Technology governance has not done a good job so far of being inclusive of social justice claims, partly because it is done by lawyers and civil servants in reference to the work of natural and mathematical scientists and business – none of these being occupations where social justice claims have traditionally been considered pressing policy concerns. Instead the politicised nature of these claims has usually been seen as inappropriate and ill-mannered in the technology policy sphere, and the groups making them have been actively excluded as belonging to a different, primarily negative mode of thinking about technology governance which does nothing but create obstacles to achieving economic, scientific and geopolitical policy aims.

However, it is becoming clear that these claims cannot be dismissed or even decentred. They are too intrusive, and their logic derives from social movements which are increasingly being normalised as policy concerns – racial and ethnic justice, gender-related rights claims, and claims to inclusion by many different groups and interests are harder for those in power to ignore than they were even five years ago. Nancy Fraser talks about this process as one of ‘abnormal justice’, where, post-globalisation, the framing of justice claims has been disrupted and ‘deviation becomes less the exception than the rule’.

Fraser highlights disruptions in various dimensions relating to justice claims. First, geography, where both claims and the agency for redress now often extend beyond the territorial state (for example climate change, or the platform economy); next, the subjects of justice claims have changed: instead of being individual legal subjects, they may now be collectives such as social movements, making the claims more political than specific. Fraser also sees a change in who gets to determine what is just, and whose interests should be considered – here we see the move from local to broader and even global publics – and in the nature of claims to redistribution or redress: these have gone from being conceptualised as primarily economic to also being cultural and political. Along with these changes, we have also seen shifts in the kind of wrongs that generate claims to justice: what used to be discussed in terms of class and ethnicity now also extend to considerations of nationality, race, gender, sexuality, (dis)ability…

This theory of abnormal justice is useful in thinking about current claims in relation to AI and justice. We are seeing these emerge in public debate in ways that technology regulation has not encountered before, with resulting pushback from the scientific, as well as the regulation and policy communities. Here are some of the key examples:

First, there’s visible, consistent public pushback by under-represented groups against the growing power of AI systems and the way they frame the world, often in the form of active conflict between traditionally marginalised groups and established interests. One very clear example is Google’s firing of Timnit Gebru, which, if you read carefully, also involved concerns about the climate impact and unsustainable use of computing resources in Google’s large language models, and about the structural injustices reflected in AI research in general – all of which led to Gebru and her colleagues being viciously mobbed by various leading figures in commercial and academic AI research. This conflict was not only about the politics of AI, but about AI as politics, and illustrated every one of Fraser’s points about shifts in justice claims.

Second, and related to this, connections are being made by activists and researchers between claims against discriminatory AI and broader political and social movements. The work of OurDataBodies, Data for Black Lives and other groups such as Extinction Rebellion is calling attention to how technology, and AI specifically, raises social justice concerns around civil rights, climate change, precarity in labour markets, criminal justice, housing, welfare and many other issues.

One particular feature of the new form of rights claims in relation to technology comes from within, in the form of employee resistance: examples include employee walkouts, letters and protests around gender and racial discrimination and harassment, climate justice, the precarisation of labour, and migration and related human rights violations.

Even within the scientific research activity that underpins the work of tech firms, we see different forms of contestation than we did a decade ago, supported by broader movements around feminism, racial justice and climate justice in particular. Examples of this include the Design Justice network, which engages with those conceptualising new technologies through the lens of problems caused by their deployment; the decision by the ACM FAccT conference to introduce interdisciplinary cross-review between computer science, law and social science papers; and the decision by the NeurIPS conference series to require authors to write about the broader impact of their paper, including ‘ethical aspects and future societal consequences’. The last resulted in a flame war between emeritus computer science professors and the conference organisers about ‘cancel culture’ and the (ir)relevance of ethics to technology research, ending in a resounding smackdown of the former by their university on the basis that it was not viable to educate scientists today without including consideration of ‘the very real impact that technology can have, especially on marginalized communities’.

In light of all this, Thomas Metzinger’s contestation of the private-sector capture of the AI HLEG’s regulatory and ethics discussions should seem less like a surprising break with tradition and good manners, and more like the inevitable breaking of a dam that had been riven with cracks for years. These conflicts, if they demonstrate anything, show that there is no ‘tech sector’ any more: these companies’ business strategies, economic power, geopolitical influence and direct political engagements cannot be addressed as confined to a particular sector. Instead they are being addressed by society in terms of their direct effects on both the sectors where they engage, and the social groups involved in those sectors (which in most of these cases means pretty much everyone).

They also show that the rational, empowered, individual legal subject envisaged by regulation does not exist, particularly with regard to the increasing overlap of AI with issues of social, environmental and economic justice. Instead, that subject forms part of the tissue of economic, technological and political power, and consequently also of structural disempowerment.

Under these circumstances, conflict and contestation are not a bug but a feature, and AI governance processes have to take account of them. This requires organisational and political attention, and the restructuring of governance processes: naming the big problems is necessary in order to address the smaller ones. So if we want to learn from the ‘fairness phase’ of AI governance, we need to find ways to connect structural problems, and their societal expressions, with law and regulation. We need to move from requirements to political engagement of various kinds. This means that fairness and responsibility are necessary, and in fact central, but that they’ve been conclusively demonstrated to be insufficient in their current thin, unenforceable configurations. If the regulation and law discussions going on around AI cannot take account of this, we are all in trouble – if only because the groups who are contesting AI represent interests that pretty much cover all of us, mostly in multiple ways.

The political demands AI governance has to (learn to) incorporate:

The focal points of conflict in AI governance at the moment seem to be the following:

1. High-stakes interventions where challenges to the system are impossible or ineffective

These are cases where the people affected either don’t get the chance to know they are being subjected to automated technological decisionmaking (or decision support, which can amount to the same thing despite being legally separate), or have no effective way to challenge it. They include the application of AI in criminal justice on issues of parole, bail or sentencing, such as the now-famous case of crime prediction analysed and contested by the ProPublica group; the use of AI to provide ‘evidence’ in migration and asylum processes; and welfare fraud prediction such as that covered by Virginia Eubanks in her book Automating Inequality.

2. Interventions that contravene scientific evidence

We are seeing people protest that some of the research on, and even deployment of, AI rests on claims that are completely unsupported by scientific evidence, if not conclusively disproved. An example of the former is predictive policing, which when subjected to analysis turns out not to be delivering on its promises, but instead to represent group profiling that feeds on and reproduces structural inequality without achieving reductions in crime. Another high-profile example is the field broadly known as phrenology: AI-enabled facial, vocal, gait, typing and other behaviour recognition used to judge mental states and propensities, e.g. ‘trustworthiness’, sexual orientation, criminality, age, place of origin, etcetera etcetera etcetera (and believe me, there is a lot of etcetera out there).

3. Systems that optimise for illegitimate outcomes

Algorithmic systems which optimise for polarisation, misinformation or the exploitation of vulnerability can be considered illegitimate in a political sense, and are seeing substantial pushback from groups around the world. These include social media algorithms that amplify the most extreme views and channel people into loops of extreme content, and systems that enable practices of predatory inclusion in exploitative finance.

4. Systems and infrastructures that impact negatively on our collective future

Here we come to the broader issues: the sustainability of computing infrastructures that use massive amounts of energy, and the climate and resource costs that this scale of computing imposes on our collective future.

How can these claims be integrated into governance processes?

All the issues named above are discussions about red lines, not just about how ‘we design for X’, X being particular values or optimising for particular outcomes. They represent the everyday battlegrounds in relation to AI technologies, the weight of harm and inequity present in the world – though seldom in the world of those who do the governing. This is a classic problem of social justice, where those who suffer harm from something are socially, economically and often even spatially remote from those who conceptualise governance of that thing. The point of protest is both to practise solidarity amongst those affected, and to bring these different worlds together – to make visible the experiences of those who are invisible to the legislator. Protest also creates costs for the powerful with regard to a given intervention. If a government decrees that facial recognition is to be deployed everywhere in my country as pre-emptive crime surveillance, and a huge crowd of people march on parliament, getting into the news reports and spurring debates about crime control and human rights, that makes the technology visible in a way that increases the political costs of deploying and sustaining it.

Traces of worldwide protest can be seen in the EU’s draft AI governance regulation. The draft regulation outlines ‘prohibited practices’ in language that is very vague as technology regulation, but quite specific in aiming at effects on people rather than at particular uses. What undermines it is the axiomatic nature of EU regulation – it is made and passed by states and their representatives, and the draft regulation lets states entirely off the hook if they legalise something prohibited in order to deploy it for national security purposes. So doing completely prohibited things is unlawful, unless the government makes it lawful. Then it’s fine.

This is, naturally, not going to fly, given that a substantial proportion of the uses of AI technology currently provoking protest worldwide are by governments – or by companies working as government contractors.

What does this mean, then, for the future of AI governance? It implies that social movements will be involved, if not directly then indirectly, in determining what constitutes a red line for AI. It also implies that if powerful parties are exempted from those red lines, that regulation will be destabilised as a reference point and not taken seriously by the public, who can see it failing the reasonable demands of the majority.

This also tells us something about what form the sphere of public reason for AI governance needs to take if the governance that it produces is to be credible. Regardless of whether legislators are comfortable with it, the conventional, essentially closed discussion between national and EU legislators, informed by consultation with private-sector stakeholders and (mainly legal) academic experts, has to be opened out to include consideration of a much broader set of actors. These include the public sector institutions which are increasingly on the leading edge of at-scale mistakes with AI; academic experts who research the social and political effects of digital technologies; and civil society organisations, who are actually the ones who can define who is at risk from what, and how. The last should not be underestimated – CSOs should be central to discussions of what might be harmful, and how to tell. Their voices should not merely be equal with those of industry, but should be central in advising regulation.

To finish – Nancy Fraser outlines the new state of abnormal justice as one where claims require a response beyond economic redistribution: instead, they require recognition, representation and redistribution. If recognition of ‘new’ voices in technology regulation does not happen, even greater dysfunctionality and harm will result. If the demands of social movements – and the people making those demands – are not integrated into the practice and institutions of governing AI and tech, there will be more, and intensified, breakage of the kind we are seeing around us now.

About the project

Places and populations that were previously digitally invisible are now part of a ‘data revolution’ that is being hailed as a transformative tool for human and economic development. Yet this unprecedented expansion of the power to digitally monitor, sort, and intervene is not well connected to the idea of social justice, nor is there a clear concept of how broader access to the benefits of data technologies can be achieved without amplifying misrepresentation, discrimination, and power asymmetries.

We therefore need a new framework for data justice integrating data privacy, non-discrimination, and non-use of data technologies into the same framework as positive freedoms such as representation and access to data. This project will research the lived experience of data technologies in high- and low-income countries worldwide, seeking to understand people’s basic needs with regard to these technologies. We will also seek the perspectives of civil society organisations, technology companies, and policymakers.