
Adjudicating Data Justice

In the span of less than a week, two important judicial decisions in Kenya and the Netherlands have demonstrated why a data justice framing is useful for analysing problems of technology in society. These two cases have kindled debates not just about data protection but also about technology’s mediating role in state-citizen relations. When states apply technological systems to provide basic functions such as registration, welfare provision, security and representation, these systems inevitably have a political character. Whom do they recognise and whom do they exclude? Does the state get to monitor and surveil people in the context of these basic governmental functions, and, if so, what should the limits be? If it is monitoring in real time using digital surveillance, should it primarily be looking for errors and misuse, or checking that people are receiving their entitlements? Both the Huduma Namba (Kenya) and SyRI (Netherlands) cases have something to say about how states are using their power to monitor and surveil, and to what extent people have a right to limit or reject that scrutiny.

In the case of the Netherlands, the system in question is a risk assessment tool[1] run by the Dutch government, designed to detect tax fraud, labour law fraud and inconsistencies in benefit claims by welfare recipients. Focusing on benefit fraud in particular, the main criticism brought against SyRI is that it criminalises poverty and disadvantage, and that it is a completely opaque system, making it impossible to check for discrimination or to gain insight into who is being flagged as risky, and why. Despite being described as an anti-fraud system, critics have pointed out that SyRI effectively equates fraud risk with errors in filling out benefit applications, even though anyone might make errors when completing complex tax and subsidy forms. In addition, the system is likely based on correlations and patterns that are supposed to demonstrate a connection between certain behaviour and fraud, such as checking an address’s water consumption against norms for the registered household, so that in practice it targets groups whose behaviour or characteristics the risk model deems suspicious. A large portion of these data sources point towards low-income recipients; at the same time, the projects in which SyRI is deployed take place in geographical areas correlated with low-income groups. By being hyper-sensitive to inconsistencies, and by including a wide range of administrative data sources on each individual, the system provides a powerful surveillance environment that trains a spotlight on the poorest and most disadvantaged, and pre-emptively criminalises them.
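Because SyRI’s actual model has never been disclosed, any concrete rendering of it is necessarily guesswork. Purely as an illustration of the kind of correlation-driven flagging critics describe, the sketch below cross-checks a few administrative records against household norms and treats any mismatch as ‘risk’; every field name, threshold and rule in it is an invented assumption, not a feature of the real system.

```python
# Purely hypothetical sketch of correlation-driven risk flagging.
# SyRI's real model is undisclosed; every rule, field name and
# threshold below is an invented assumption, for illustration only.

from dataclasses import dataclass

# Assumed norm: litres of water per registered occupant per day.
WATER_NORM_PER_PERSON = 120

@dataclass
class HouseholdRecord:
    registered_occupants: int    # population register
    daily_water_litres: float    # utility company
    declared_income: float       # tax authority
    benefit_claimed: bool        # benefits agency
    form_inconsistencies: int    # mismatches across agency forms

def risk_flags(rec: HouseholdRecord) -> list[str]:
    """Return the list of (invented) rules this household trips."""
    flags = []
    expected = rec.registered_occupants * WATER_NORM_PER_PERSON
    # Low water use is read as 'fewer people live here than registered'.
    if rec.daily_water_litres < 0.5 * expected:
        flags.append("water use below household norm")
    if rec.benefit_claimed and rec.declared_income == 0:
        flags.append("benefit claim with no declared income")
    # Any clerical error on any form counts against the claimant.
    if rec.form_inconsistencies > 0:
        flags.append("inconsistent entries across agency records")
    return flags

household = HouseholdRecord(registered_occupants=4,
                            daily_water_litres=180.0,
                            declared_income=0.0,
                            benefit_claimed=True,
                            form_inconsistencies=1)
print(risk_flags(household))  # three flags from ordinary circumstances
```

The point of the sketch is that each rule looks individually reasonable, yet together they convert poverty-correlated circumstances and ordinary clerical errors into accumulating risk flags.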

The case brought against the National Integrated Identity Management System (NIIMS) in Kenya (popularly known as the Huduma Namba, after the single ID number it promises Kenyans) raises a different, but related, question. Just like SyRI, it exposes a tension between the government’s responsibility to deliver services and entitlements and its responsibility to do so efficiently without wasting resources, and, like SyRI, it constructs an extensive net to monitor people while serving them. The new biometric registration system centralises records on welfare and health service provision, passport applications, mobile SIM registration, and voter registration. This is not uncommon: similar systems are already in place in India, Argentina, Nigeria and other countries, and development actors have declared ambitions to ‘empower and protect citizens through digital identification technology’ across Sub-Saharan Africa.

The claims made against Huduma are that its apparently comprehensive system actually excludes the marginalised, by effectively defining anyone who cannot register as a non-citizen. This includes ethnically marginalised citizens such as those with Nubian heritage; refugees and other displaced persons; migrants; people who lack prior registration documents; and anyone who is unwilling to register because the system may link their fingerprints to misdemeanours such as petty theft or visa infractions committed at some point in the past. There is thus systematic exclusion for a variety of reasons, combined with issues of privacy violation due to the (now shelved) plans to include DNA records and GPS details of people’s dwellings.

Though they are very different in nature, these systems share some important characteristics in terms of data justice concerns. Each has a Christmas-tree-like structure onto which new sources of data can be continually hung, and in each case every added source makes it more likely that errors, inconsistencies and minor infractions will be found somewhere in a person’s records, as the arithmetic below illustrates. It also becomes harder for the person involved to correct mistakes or have any infractions removed from their record. The database becomes a hall of mirrors in which past misdemeanours or errors refract infinitely through the system. Both systems include people in registration in order to exclude the undeserving from service provision; both effectively minimise the possibility of appeal for those treated unjustly; and the design of both focuses any unfair effects specifically on disadvantaged groups.
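To see why, consider a deliberately simplified back-of-the-envelope calculation (the per-source error rate is an assumed number, not a figure from either case): if each of n linked data sources independently has some small probability p of containing an error or mismatch for a given person, the probability that the system finds at least one ‘inconsistency’ is 1 − (1 − p)^n, which climbs quickly as sources are added.

```python
# Illustrative arithmetic only, not a model of either system: if each
# of n linked data sources independently has probability p of holding
# an error or mismatch for a given person, the chance the system finds
# at least one 'inconsistency' is 1 - (1 - p)**n.

p = 0.02  # assumed per-source error rate (2%), chosen for illustration

for n in (1, 5, 10, 20, 40):
    at_least_one_flag = 1 - (1 - p) ** n
    print(f"{n:2d} sources -> {at_least_one_flag:4.0%} chance of a flag")
# 1 source -> 2%; 20 sources -> ~33%; 40 sources -> ~55%
```

Under these assumed numbers, a person who is 98% ‘clean’ in any single database still has roughly a one-in-three chance of being flagged once twenty databases are linked, without any fraud having occurred.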

Another important issue that the cases have in common is that the claims being made go beyond data protection to constitutional and rule-of-law issues. Tijmen Wisman, a critic of the Dutch system, noted this week that the main problem with SyRI is that it amplifies the power of the state and brings up issues of ‘the rule of law and the individual security of citizens’, posing a ‘danger of arbitrary and opaque interferences’ through the automation and remoteness of the judgements made about people. SyRI raises a transparency paradox: the state gathers ever more information about people while at the same time building ever more opaque systems, making it difficult for those people to exercise fundamental rights such as the rights to non-discrimination and a fair trial, or to invoke the presumption of innocence. The court’s judgement clearly demonstrated that the issues go beyond data protection: it explicitly discusses transparency for non-discrimination and due process purposes, and frames issues of data processing in terms of their impact on privacy, more specifically autonomy, self-determination and non-discrimination.

Similarly, the Kenyan court has judged that data protection is not enough to limit government power in the case of the Huduma system, and that other protections must come into play to limit the system’s potential for amplifying inequality and discriminating against the marginalised. In both cases a broad coalition of opponents representing a range of public and group interests has assembled to make legal claims against the system (in the Huduma case, the Kenya Human Rights Commission, Nubian Rights Forum and the Kenya National Commission on Human Rights; and in the SyRI case, the Dutch Legal Committee for Human Rights, FNV Trade Union, Privacy First, Stichting KDVP (a patients’ rights organisation), and the National Client Council), suggesting that the claims matter on levels beyond privacy.

The Kenyan High Court’s decision on Huduma will be published in the coming days, but several things are already clear about the judgement. First, it brings limited comfort to anyone who is against intrusive forms of civil registration per se. Just as in the 2018 Aadhaar judgement in India, the system itself has been judged lawful on the basis that the state is allowed to collect information on people in order to govern them. Similarly, in the SyRI case in the Netherlands, the court did not find the idea of such a system inherently problematic. What all three judgements have in common is the requirement that if a state wants to impose a new type of datafied intervention on the general public, it must first (at least) establish a sufficient legal framework for doing so. This affirmation of the rule of law in relation to technological intervention is something to celebrate, but it is not enough to prevent the misuse of identification and surveillance technologies. Many bad and oppressive laws exist around the world: there are more fundamental principles that we should consider.

In each of these three judgements, the court found that even if the central justification for a system is credible, it cannot be stretched to require people to comply with an array of added demands, such as the provision of DNA and GPS data. Unless this sort of data collection is clearly related to the central aim of serving the people, the government must pass separate laws, through a democratic process, to justify it. The Kenyan court’s decision, so far, does a similar job to the Aadhaar judgement in defining the reach of government power and making its boundaries clear. The demand that the government establish a clear legal framework for the system’s collection and use of data is important because it goes beyond a reliance on data protection to do work that is constitutional in nature: mediating government power and directing it towards particular ends.

What each of the judgements leaves open, however, is how we should determine whether systems of datafied control are just. In the Netherlands, the court addressed privacy clearly and set some standards for transparency and for scrutiny or oversight by third parties, but it left out the question of whether designing digital surveillance to penalise the poor and disadvantaged unequally is acceptable in the first place. An exclusive focus on privacy is not the best route to challenging the legitimacy of surveillance if the claims being made concern justice, fairness and the rule of law. Legal challenges inherently force us to pick a single ground for our objection; if we want a broader discussion, we would do well to move it out of the courts and into the political sphere.

[1] The ontological debate within the judgement about what SyRI ‘actually is’ may be of particular interest to STS scholars.

About the project

Places and populations that were previously digitally invisible are now part of a ‘data revolution’ that is being hailed as a transformative tool for human and economic development. Yet this unprecedented expansion of the power to digitally monitor, sort, and intervene is not well connected to the idea of social justice, nor is there a clear concept of how broader access to the benefits of data technologies can be achieved without amplifying misrepresentation, discrimination, and power asymmetries.

We therefore need a new framework for data justice: one that integrates data privacy, non-discrimination, and the freedom not to use data technologies with positive freedoms such as representation and access to data. This project will research the lived experience of data technologies in high- and low-income countries worldwide, seeking to understand people’s basic needs with regard to these technologies. We will also seek the perspectives of civil society organisations, technology companies, and policymakers.