Taking The Group Privacy Discussion Forward

Linnet Taylor

Linnet Taylor is Assistant Professor of Data Ethics, Law and Policy at the Tilburg Institute for Law, Technology, and Society (TILT). She was previously a Marie Curie research fellow in the University of Amsterdam’s International Development faculty, with the Governance and Inclusive Development group. Her research focuses on the use of new types of digital data in research and policymaking around issues of development, urban planning and mobility. She was a postdoctoral researcher at the Oxford Internet Institute, and completed a DPhil in International Development at the Institute of Development Studies, University of Sussex. Her work focuses on data justice – the development of a framework for the ethical and beneficial governance of data technologies across different regions and perspectives.

What follows is an account of the group privacy session at the Amsterdam Privacy Conference on October 8, 2018. This is the precursor to a new round of debates on the topic, with a view to producing a book or journal special issue. It follows on from a previous discussion, which resulted in the book Group Privacy: New Challenges of Data Technologies.

Where we are now:

We began with an outline of where the process of developing the notion of group privacy stands so far. Producing the book on group privacy, which came out in 2017, did not result in a consensus amongst the lawyers, philosophers and social scientists involved as to whether we could speak of ‘its privacy’ rather than ‘their privacy’ with regard to some kinds of groupings. We focused on groupings produced through algorithmic and remote-sensing techniques, because once we think of the way analysts or algorithms create groupings for the purposes of advertising, insurance or security risk, the idea of ‘its privacy’ starts to move toward centre stage.

A group of communications scholars working with a computer scientist (Suh et al., 2018) have taken an experimental approach to the idea of ‘its privacy’. Using hypothetical apps, they empirically tested the idea that people have a sense of group privacy operating alongside their feelings about their own individual privacy. They found that people did become concerned about group privacy issues, and that this affected how they felt about regulating the technologies in question – it was only group privacy issues that were associated with a preference for greater oversight.

The discussion at the Amsterdam Privacy Conference was led by a group of disciplinary experts: Sjaak van der Geest, Payal Arora, Jo Pierson, Valerie Steeves, Linnet Taylor and Sandra Wachter, with Bart van der Sloot also providing input.

The first part of the session was a discussion about bringing new disciplines into the group privacy debate. Sjaak van der Geest offered insights from his work in medical anthropology, where members of a family or village group will protect themselves against ‘courtesy stigma’ when a relative or neighbour is afflicted with a stigmatised condition such as AIDS. Here the group pressures the individual into invisibility in order to protect itself from stigma.

We also discussed Payal Arora’s work on privacy in the Global South: the idea of ‘the poor’ as a grouping is an important one because it is used instrumentally in the development sector to exert power in the form of classification and intervention. Many of the global poor, and especially young people, who predominate demographically in lower-income countries, are considered either as not meriting digital privacy (framed as a luxury) or as not interested in it. Arora’s work shows how they use their visibility strategically to form connections and identities beyond their immediate communities. It also shows the strategic use of groups by micro-credit lenders, who for decades have leveraged a social surveillance strategy of lending to a group so that the other members would pressure individuals to make repayments. This backfired, however, by over-pressuring those who could not pay back, often making borrowers’ lives unliveable.

The other perspectives added to the debate this time came from surveillance studies (Valerie Steeves), media studies (Jo Pierson and Payal Arora) and critical studies of AI (Sandra Wachter).

Themes that emerged from the expert discussion:

Group rights and identity building: Research by Priscilla Regan has shown how the social value of privacy intersects with group rights, but we should also think about the intersection of human rights with data protection in this debate. To balance the EU perspective on data protection we should consider the Canadian system, which separates data protection (whose job is to keep the data market functioning, and which focuses on individual data subjects) from human rights, which can consider groups. The exercise of asymmetric power by holders of data over groups can be seen as inherently wrong, and contrary to the core human rights principle that we join groups that have meaning for us and that help us build our identity in ways that we choose.

Research methods: Group privacy makes us think about our own standpoint as researchers: we categorise people in order to study them, and methods that are respectful of the communities we want to interact with are hard to develop, because privacy is so messy. By asking people questions about their privacy, you are already invading it. Methodologically, Steeves’ group has arrived at participatory action research methods and deliberative dialogue. The best way to get at privacy invasions is not to define people: we can do better research by grouping people according to the conversation they are interested in joining, rather than making them align themselves with particular categories that may be stigmatising or not the most representative of their identity.

Mark Deuze speaks of ‘living in media’: the idea that media become so domesticated that it is hard to identify when we are talking about individuals and when about groups. To understand group privacy we may want to research how algorithms come into existence, and which groups we take into account when training them. Technical experts often think they have a representative sample when they do not, so we need to explore how to build in data quality using old-fashioned social science, before algorithms are tested and used.

Intersectionality: in terms of methods, there is a tension between reifying categories and imposing them on people, but commonalities are important because they give people strength. Intersectionality raises the issue that the law performs categorisation that sometimes mirrors data-analytic groupings: being in one framing and another simultaneously creates a gap, so intersectionality is about the gap, not the overlap – about the problem of pre-sorting – and may therefore have something theoretically to offer discussions of group privacy.

Legal categories of rights in relation to data: Sandra Wachter asserts that these no longer work well, because people now need a right to co-determine the way they are evaluated. Currently in the EU we have rights over input data but not output data, so the party processing the data has the right to determine how we are categorised. Migration law decisions have established that inferences and assessments are not personal data, and that the data subject is not in a position to assess whether they have been characterised accurately. So there is no authority I can go to in order to rectify an incorrect assessment of me. We can address this by thinking about privacy in relation to data in a more traditional way: the way the European Court of Human Rights thinks about it. In contrast, the European Court of Justice thinks about it in terms of data protection, i.e. input data, rather than identity and the ability to change it. If you want to say something about me, you have to justify it; I don’t have to justify my own truth. This would mean that, in legal terms, we could not focus only on data collection.

Privacy, data and boundaries: the current disjunctures in rules around privacy lead to boundary turbulence (see Sandra Petronio’s work on this). As well as personal boundary coordination and intersecting boundary coordination between individuals, there is also unified boundary coordination, in which groups set boundaries for their privacy. Protecting privacy must be a group practice – bringing in more than just the developers of a system, including all those who can direct, shape and ground-truth the data and the products of that system. Big data analytic groupings also have potentially negative effects on group solidarity, precisely because they break down the boundaries of existing social configurations in favour of data-driven groupings.

Ephemerality of data analytic groupings/stability of underlying social groupings: Shifting affiliations are not just a problem computationally, but also in real life. We don’t have good legal responses to this, and data-driven methods are therefore left completely unregulated. We can apply rights to some groups, such as churches, which can own property and have the right of assembly. It matters to have a right to reasonable inferences (see Sandra Wachter’s work on this) specifically in the online world, because online information and opinion can be distributed and copied in an unlimited way. If we have the right to be forgotten, we should also have the right to influence how we are being assessed.

The ephemerality of data analytic groups is also problematic methodologically, because some of the ways in which they may cause harm are related to real-world groupings that are unfortunately quite stable over time: for low-income people living in precarity, these groupings are deeply entrenched.

We discussed how technologies developed with a marketing/advertising aim may create harm through their ability to create groupings: WhatsApp, for example, has had to modify its product in India and other countries because the groupings it made possible were driving violence through the distribution of propaganda. The advertising logic of platforms built to reach as many people as possible becomes untenable in the face of the legal and rights-based aims of protecting people from violence. How, then, should we distinguish mobs, crowds and groups? A mob seems ephemeral and temporary, but it has deep roots in stable groups. In Sri Lanka, rumours about Muslims trying to sterilise people come from established tensions; the violence against the Rohingya in Myanmar is deep-rooted too. So treating these problems as ephemeral does not address the history of actual insecurities and cross-group violence.

Group rights and privacy: Privacy can be argued to be the necessary condition for other (group) rights to be exercised, e.g. freedom of assembly. Although individuals must claim rights under the law, there are aspects of law that are predicated on group problems. For example, Canadian law on sexual assault invokes section 15, which treats it as a group problem of privacy, although the rest of the law is built around the liberal individual. The courts have concluded that women are treated differently as a group, and if we pretend they aren’t, we will get the wrong answer when we deal with these issues in law.

Watch this space for further discussions over the next months!

(Cross-post from Linnet’s blog)

About the project

Places and populations that were previously digitally invisible are now part of a ‘data revolution’ that is being hailed as a transformative tool for human and economic development. Yet this unprecedented expansion of the power to digitally monitor, sort, and intervene is not well connected to the idea of social justice, nor is there a clear concept of how broader access to the benefits of data technologies can be achieved without amplifying misrepresentation, discrimination, and power asymmetries.

We therefore need a new framework for data justice that integrates data privacy, non-discrimination, and non-use of data technologies with positive freedoms such as representation and access to data. This project will research the lived experience of data technologies in high- and low-income countries worldwide, seeking to understand people’s basic needs with regard to these technologies. We will also seek the perspectives of civil society organisations, technology companies, and policymakers.