Data and Justice in 2019 — Who can afford big tech, and who can live without it?

Linnet Taylor

Linnet Taylor is Assistant Professor of Data Ethics, Law and Policy at the Tilburg Institute for Law, Technology, and Society (TILT). She was previously a Marie Curie research fellow in the University of Amsterdam’s International Development faculty, with the Governance and Inclusive Development group. Her research focuses on the use of new types of digital data in research and policymaking around issues of development, urban planning and mobility. She was a postdoctoral researcher at the Oxford Internet Institute, and completed a DPhil in International Development at the Institute of Development Studies, University of Sussex. Her work focuses on data justice – the development of a framework for the ethical and beneficial governance of data technologies across different regions and perspectives.

2019 has been the year that the data economy went fully global. This was not so much a transformation in connectivity – nearly half the people in the world are not yet internet users, and if you are from a low-income country you probably have only patchy access to 3G, if you are lucky – as a change in the scale and reach of the world’s data infrastructures, to the point where no one is truly invisible any more. You may not be able to reach the connected world, but it can certainly reach you.

One clear sign of this internationalisation of infrastructure was the expansion of AI-enabled surveillance. A Carnegie Endowment report shows that 47 of the 65 countries using AI surveillance are doing so with Chinese technology, though US and European firms are also supplying substantial amounts. The entanglement of systems and infrastructures is creating an increasingly tight net of surveillance for citizens of almost every country, serving both control and profit. If you are a Hong Kong citizen who has been out in the streets protesting during 2019, your citizen ID card (supplied by a French company) will have channelled your biometrics to the Chinese government’s facial recognition systems, which will have tracked and analysed your movements throughout the protests. If you are Zimbabwean, your facial features are now being used by Chinese tech firms to train their AI systems to recognise darker-skinned faces – useful both for politics at home, where AI is being used to track and control the Uighur population, and for profit abroad, in those 47 countries currently buying AI surveillance from Chinese firms, many of whose populations are not light-skinned.

This internationalisation of infrastructure also shows up in the expanded reach of cloud services. In the coming year, much of the Kenyan data processed by businesses will flow through Amazon Web Services, as the country joins South Africa in becoming one of the first Sub-Saharan links into the firm’s increasingly global cloud. Microsoft is also opening data centres in Africa – and as we can see from Microsoft’s map, cloud services are reaching countries in descending order of income.

But here we see how being part of a global data economy doesn’t necessarily mean you get meaningful access to connectivity or services. Let’s compare those maps:

Microsoft’s cloud services:

…and Carnegie’s AI surveillance map:

What we see is that while you may not have access to the cloud, you can still be tracked and controlled by your government’s AI. This increase in the reach of data and analytics is even more noticeable for those who don’t have a country to call home. Refugees have become increasingly linked into the global data market over the last year as biometrics have become a condition for receiving aid and as huge tech firms such as Palantir have been invited to process data about refugees and service provision. Just as with non-refugee populations around the world, the provision of basic services and digital surveillance have become increasingly intertwined – see, for example, the beginning of the massive rollout of biometric ID across Africa, starting in Kenya with the registration push for its NIIMS database.

ID provision is a complex issue: people need ID to claim services, but this ‘you must have digital ID’ logic is increasingly getting bound up with the rights that underlie service provision. We do not have the right to food or medical care because we have ID; we have ID because we have rights. Yet this relationship is frequently inverted, so that people can be cut off from the right to food, as in Yemen, or from freedom of association and political expression through their mobile phones, as with Bangladesh’s attempt to silence the Rohingya refugee population by cutting off their access to connectivity.

While we see states growing ever more entwined with corporate power in the political bargain around surveillance, we also see corporations openly claiming powers that previously belonged only to states, such as setting up a currency – and, just like some states, failing at it. If Libra fails, as seems likely, it will be because of resistance from regulators, central banks and political leaders; but it has already provoked our first serious international debate over how far tech companies should be allowed to take on functions traditionally reserved for states.

Now that companies can become responsible for refugee service provision, or set up their own currencies, what is the big tech equivalent of being too big to fail? If a GAFA-level corporation running a chunk of the world’s now-essential services goes down through violence, politics or bad planning, whose interests are in play? Will we start to see situations where governments have to save tech corporations instead of regulating them, because those corporations have taken over such a substantial chunk of what used to be public information infrastructure? And if so, what will that mean for the public?

2019 has made it clear that the data market is not just an economic proposition but a tool of international political power. When the G20 discusses data protection it’s not because world leaders are concerned that WhatsApp is not private enough, but because defining which data can be traded means you can both allocate and control the power to trade. For this reason we saw a corresponding push against the G20 from India, Indonesia and South Africa to instead make the WTO the primary forum for setting the data market’s rules. We have seen this conflict mirrored in the data localisation power-play during 2019: what is called ‘data protection’ when done by the EU is denounced as ‘data protectionism’ by the G20 when done by a less wealthy country such as Nigeria or Indonesia.

In these moves to localise data, and in the G20’s responses to them, we are starting to see a renewal of the Non-Aligned Movement, a grouping begun in the 1960s by countries that wished to remain neutral in the Cold War. In 2020, the NAM’s attempts to nationalise data are likely to progress – and to come into conflict with issues of data sovereignty. China has nationalised data through its localisation laws, but the Uighurs could claim that they lack data sovereignty because they are on the receiving end of unwanted analytics. Similarly, data could remain localised on a humanitarian organisation’s servers while being analysed, with the products of that analysis used by for-profit actors whose interests do not align with those of refugees. Nationalising data means a country can control who can use and profit from it; it does not mean that data will be used in the interests of that country’s people.

A truly global data economy is going to be a political phenomenon as much as a financial one. Economically, just like the existing global economy, it will empower those who have capital and infrastructure to profit from those who do not. Politically, however, 2020 may tell a different story. The last time the 120 members of the NAM were motivated to cooperate politically, we saw them shape the end of imperialism. What that kind of pushback would look like in relation to the data economy is not yet clear – although it would not necessarily be justice-oriented, as we can see from those states’ enthusiastic embrace of technologies of surveillance and control. An anti-colonial movement in the global data economy would be an unpredictable but potentially hopeful outcome of the next year.

About the project

Places and populations that were previously digitally invisible are now part of a ‘data revolution’ that is being hailed as a transformative tool for human and economic development. Yet this unprecedented expansion of the power to digitally monitor, sort, and intervene is not well connected to the idea of social justice, nor is there a clear concept of how broader access to the benefits of data technologies can be achieved without amplifying misrepresentation, discrimination, and power asymmetries.

We therefore need a new framework for data justice integrating data privacy, non-discrimination, and non-use of data technologies into the same framework as positive freedoms such as representation and access to data. This project will research the lived experience of data technologies in high- and low-income countries worldwide, seeking to understand people’s basic needs with regard to these technologies. We will also seek the perspectives of civil society organisations, technology companies, and policymakers.