
What SyRI can teach us about technical solutions for societal challenges


Marijke Roosen

The influential intellectual Yuval Noah Harari warns that technological revolutions pose unprecedented challenges to our societies, arguing that they will affect, among other things, our conception of democracy. Technology makes it easier to control and monitor citizens, and in doing so it concentrates power in the hands of the already powerful. Algorithms perform better the more data about individuals you centralize, and thus the less you respect their privacy.

A powerful illustration of these challenges is the use of digital technologies in the domain of social welfare. UN Special Rapporteur on extreme poverty and human rights Philip Alston uses the concept of the digital welfare state – as part of digital governance – to refer to the worldwide tendency to use digital data and technologies in the context of social protection. While acknowledging the potential appeal of digital data and technologies in this domain, Alston warns of “a grave risk of stumbling zombie-like into a digital welfare dystopia”: a dystopia in which citizens are increasingly visible to their governments, but not the other way around. Social welfare serves as a playground for experiments with digitization, because such experiments can easily be framed as efforts to increase the general wellbeing of a country’s population. In reality, however, new technologies facilitate, justify and shield welfare reforms that are part of neoliberal policies. This demonstrates that technologies are not neutral but politically charged. “The risk is that the digital welfare state provides endless possibilities for taking surveillance and intrusion to new and deeply problematic heights”, Alston continues.

A current example demonstrating the challenges of the digital welfare state is the Dutch SyRI system. In October 2019, eight parties[1] took the Dutch government to court over its use of SyRI. What exactly is SyRI? The acronym stands for Systeem Risico Indicatie, or System Risk Indication. The risk indication refers to risk profiling in the field of social security, tax and labour fraud. The system was given a legal basis in 2014, despite the concerns that had been raised about citizens’ privacy. SyRI links existing data already available to the government in order to predict the likelihood that a person will commit social security, tax or labour fraud (for a more elaborate discussion of the history and regulation of SyRI, see Van Schendel, 2019). Both SyRI’s purpose and the data that may be used for that purpose are defined very broadly, and the government can draw on whatever data it has at its disposal. The government moreover provides no transparency about the models used for this risk profiling. In other words, we do not know what data are fed into the model or how the model reaches a decision.
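To make this linkage step concrete, here is a minimal sketch in Python. Everything in it is invented for illustration, including the datasets, the field names and the shared identifier `citizen_id`; the actual sources and linkage keys used by SyRI were never disclosed.

```python
# Three fictitious datasets the government already holds, keyed on an
# invented shared identifier ("citizen_id"). The real SyRI sources and
# linkage keys are not public.
benefits  = {1: {"declared_household_size": 1},
             2: {"declared_household_size": 2}}
residency = {1: {"registered_residents": 2},
             2: {"registered_residents": 2}}
utilities = {1: {"water_usage_m3": 70.0},
             2: {"water_usage_m3": 55.0}}

# The linkage step: separate views of the same person become one
# profile that a risk model can then score.
profiles = {cid: {**benefits[cid], **residency[cid], **utilities[cid]}
            for cid in benefits}
print(profiles[1])
# {'declared_household_size': 1, 'registered_residents': 2, 'water_usage_m3': 70.0}
```

The point is not the mechanics of the join but its effect: once linked, a person’s profile is far more revealing than any single dataset, and the citizen has no view of which datasets were combined.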

Reason enough for concern. The plaintiffs in the SyRI court case argued that SyRI is an unforeseeable, unnecessary, disproportionate and therefore inadmissible interference with citizens’ private lives, and thus a violation of Article 8 of the European Convention on Human Rights (ECHR). Article 8 protects citizens against disproportionate and arbitrary interference by public authorities in their private lives: limitations of privacy must be regulated by law, serve a legitimate purpose and be necessary in a democratic society. If the government wishes to apply SyRI, there must be a pressing social need to do so, and the limitation of citizens’ privacy must be proportionate to the legitimate purpose, justifiable and relevant. In this light, SyRI is problematic, not only because of the secrecy surrounding the data that are used and the way those data are processed, but also because of its proactive targeting of vulnerable populations, which leads to discrimination against and stigmatization of people with a low income, a low socioeconomic status or a migration background. With no insight into how SyRI processes their data, citizens find it difficult to defend themselves against its output: they must contest a system whose workings they do not and cannot know. To make matters worse, SyRI appears to be ineffective, as it has not succeeded in identifying actual fraud, and the State has not satisfied the requirement of subsidiarity either, because it never examined whether less invasive alternatives to SyRI exist. The plaintiffs not only problematize the current use of this automated decision-making system on vulnerable populations; they also see it as a harbinger of automated risk profiling being applied to other segments of the population, serving a variety of purposes. “SyRI is just a first step in the direction of the perfection of the control society”, the plaintiffs argue.

On 5 February 2020, the Dutch court ruled on the SyRI case. In its ruling, the court recognized that social security is one of the pillars of Dutch society and an important contributor to Dutch welfare. Combating fraud is crucial to maintaining citizens’ support for the Dutch welfare state, and combating fraud is SyRI’s central goal. New technologies, including digital ones, provide the government with increased opportunities to exchange data in order to combat fraud.

The court states that new technological possibilities ought to be used to combat fraud; the SyRI legislation therefore serves a legitimate purpose. Developments in new technologies, however, also increase the significance of the right to the protection of personal data. Protecting citizens’ right to privacy is crucial for their trust in the government, the court argued. A lack of sufficient and transparent protection of that right produces a chilling effect: citizens become less willing to provide data, and support for government action declines.

The court furthermore states that it is the government’s responsibility to strike the right balance when applying new technologies, and that this applies to SyRI as well. The court weighed the content of the SyRI legislation against the violation of privacy it entails. The plaintiffs argued that SyRI uses pattern recognition, data mining, risk profiling and big data; according to the government, SyRI merely compares existing, factual data using simple decision trees. The court was unable to verify what SyRI exactly is, because this has not been made public, and neither have the risk indicators from which the risk is calculated or the data that are used. That is a deliberate choice. The SyRI legislation does not provide for any obligation to inform citizens about which data are used and how they are processed, nor about whether a risk has been reported. The court ruled that SyRI involves the structured processing of existing data available to government agencies, with no limitation on the data the government can use for its purposes: existing data are compared against risk indicators. And even though SyRI does not currently rely on deep learning, it could do so in the future.
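The government’s description of “simple decision trees” can be illustrated with a hedged sketch. The indicators, thresholds and record fields below are entirely invented; the actual SyRI model and its risk indicators were never disclosed, which is precisely the problem the court identified. The sketch shows that even trivially simple branching rules are opaque to the people they judge as long as the rules themselves stay secret.

```python
from dataclasses import dataclass


@dataclass
class LinkedRecord:
    """Fictitious citizen profile linked across government databases."""
    declared_household_size: int
    registered_residents: int   # from the population register (invented)
    receives_benefits: bool
    water_usage_m3: float       # linked utility data (invented)


def risk_indication(r: LinkedRecord) -> bool:
    """Flag a record as a 'risk' via hard-coded branching rules.

    All rules and thresholds here are hypothetical, not SyRI's.
    """
    if not r.receives_benefits:
        return False
    # Invented rule 1: mismatch between declared household size and
    # the number of registered residents.
    if r.registered_residents > r.declared_household_size:
        return True
    # Invented rule 2: water usage high relative to declared household
    # size, as a proxy for unregistered cohabitants.
    if r.water_usage_m3 > 40.0 * r.declared_household_size:
        return True
    return False


record = LinkedRecord(declared_household_size=2, registered_residents=2,
                      receives_benefits=True, water_usage_m3=95.0)
print(risk_indication(record))  # True: flagged, and no explanation is owed
```

Without disclosure of the indicators and thresholds, the person flagged by such a tree has no way to contest the second rule, or even to know it exists.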

The court evaluated the SyRI legislation in light of Art. 8 of the ECHR: is there a fair balance between the social interest served and the violation of privacy? The court ruled that the safeguards protecting citizens’ privacy are insufficient; the legislation does not guarantee a fair balance. SyRI is insufficiently transparent and verifiable. There is no information about the factual data: which factual data can justifiably lead to the conclusion that there is a risk? There is also too little insight into the kind of algorithm used to determine risks: it is impossible to verify how the simple decision tree arrives at an estimate, or through what steps. The court therefore agreed with the plaintiffs that it is difficult for people to defend themselves against a reported risk, and equally difficult for them to check whether their data have been used and processed correctly when no risk is reported. The court furthermore agreed that the use of risk models can unintentionally lead to discrimination and stigmatization. SyRI has so far only been used in so-called problem areas. That in itself does not mean that the use of SyRI is in all cases disproportionate or a violation of the ECHR. Given the large amount of data that can be fed into SyRI, however, the risk exists that it unintentionally picks up on certain correlations, such as a lower socioeconomic status or a migration background, the court continued.

In order to fully grasp the context that made the introduction of a system as problematic as SyRI possible, we draw upon insights from the panel discussion organized by Asser, where panelists Tijmen Wisman (Vrije Universiteit Amsterdam, Platform Bescherming Burgerrechten), Sanne Blauw (De Correspondent), Valery Gantchev (Rijksuniversiteit Groningen) and Sonja Bekker (Tilburg University) reflected upon the SyRI judgment. From a technical perspective, SyRI is problematic because, even though the state claims it uses simple decision trees for risk profiling, it could move to deep learning, which lacks transparency; and even the simple decision tree allegedly used by the government lacks transparency, Blauw explains. The impact on citizens of being exposed to such a system should not be underestimated. Bekker quoted a member of a family living in an area targeted by SyRI: “[the government] doesn’t realize what the impact of SyRI is. My mother is panicking because she has a spare bed, because my grandmother comes to visit and sometimes she spends the night with us. Now my mother is scared because she thinks that the spare bed will be used as an indication that we have an extra person in our household and that this will affect our benefits.” This undermines the fundamental trust and solidarity within society, and undermines meaningful interaction with the government.

Gantchev framed the introduction of SyRI within a welfare conditionality framework, which refers to the tendency to make access to social welfare conditional on participation in the labor market (Watson, 2014). As governments increasingly emphasize the obligations of people who depend on social welfare, they deploy control mechanisms to enforce negative sanctions. SyRI is an example of such a control mechanism, and such mechanisms have a tendency to gradually intensify, Gantchev observes. This process is part of the shift from a welfare state to a repressive welfare state, in which the state deprives people of their liberties in exchange for social security. According to Gantchev, governments show a tendency to neglect or minimize the rights of the beneficiaries of social benefits. Wisman adds that the vulnerability of the people exposed to SyRI compounds the system’s problematic character. People are left to their own devices to keep their administration in order, something some of them struggle with. SyRI targets these already vulnerable people, making them more vulnerable still. This raises the question of what we want our society to look like and how we can use digital technologies to achieve a desirable society. Bekker encourages us to stop treating the digital as a goal in itself. What we need are human interactions and empathy. Not everything should be digitized, and the processes that do become digital must be as transparent as possible. “Argue well why you need digital tools and only then apply them. Instead of the way it goes now: we digitize things and hardly argue why”, Bekker states, concluding: “sticks alone do not work. You need carrots as well. Human interactions work and you need tailor-made help”.

The SyRI case offers insights that generalize to the challenges of the digital welfare state. Blauw argues in an opinion piece in The Correspondent that the wider problem with data is that it reinforces governments’ preconceptions and provides evidence for policies based on potentially biased perceptions. “If you only search in certain places, you will only find something in those places”, Blauw writes. “Then you can point at the data and say – hey, you see, there is fraud in those kinds of neighbourhoods. Let’s look there more often. While other violations are left unchecked.”
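Blauw’s point can be made concrete with a minimal simulation of this selection-bias feedback loop. All numbers are invented: in the sketch below, both neighbourhoods have the same true fraud rate by construction, but only the targeted one is inspected, so every detected case comes from that neighbourhood and appears to confirm the prior.

```python
import random

random.seed(42)
TRUE_FRAUD_RATE = 0.02  # identical in both neighbourhoods by construction


def inspect(n_residents: int) -> int:
    """Count the fraud cases found when every resident is checked."""
    return sum(random.random() < TRUE_FRAUD_RATE for _ in range(n_residents))


detected = {
    "targeted neighbourhood": inspect(10_000),
    "ignored neighbourhood": 0,  # never inspected, so nothing is ever found
}

for area, cases in detected.items():
    print(f"{area}: {cases} cases detected")
# The resulting data "show" fraud only where the government already looked,
# even though the underlying rate is the same everywhere.
```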

But it doesn’t have to be like that. Alston acknowledges the potential of digital technologies such as artificial intelligence. Even though these technologies have so far primarily been used for the benefit of people who are economically secure, they could and should be used to improve well-being among vulnerable populations as well. Not only are vulnerable populations currently not benefiting from the potential of digital technologies, they are actively surveilled and punished by them. Alston emphasizes that what governments need are “appropriate fiscal policies and incentives, regulatory initiatives, and a genuine commitment to designing the digital welfare state not as a Trojan Horse for neoliberal hostility towards welfare and regulation but as a way to ensure a decent standard of living for everyone in society”. In other words, governments ought to use digital technologies in such a way that they no longer punish the vulnerable in society, but let those people benefit instead. SyRI has shown how challenging this is, and it has given us food for thought on how to continue this quest for an inclusive society supported by digital technologies.

[1] Nederlands Juristen Comité voor de Mensenrechten, Stichting Platform Bescherming Burgerrechten, Stichting Privacy First, Stichting Koepel van DBC-Vrije Praktijken van psychotherapeuten en psychiaters, Landelijke Cliëntenraad, De Federatie Nederlandse Vakbeweging, and two private persons

About the project

Places and populations that were previously digitally invisible are now part of a ‘data revolution’ that is being hailed as a transformative tool for human and economic development. Yet this unprecedented expansion of the power to digitally monitor, sort, and intervene is not well connected to the idea of social justice, nor is there a clear concept of how broader access to the benefits of data technologies can be achieved without amplifying misrepresentation, discrimination, and power asymmetries.

We therefore need a new framework for data justice integrating data privacy, non-discrimination, and non-use of data technologies into the same framework as positive freedoms such as representation and access to data. This project will research the lived experience of data technologies in high- and low-income countries worldwide, seeking to understand people’s basic needs with regard to these technologies. We will also seek the perspectives of civil society organisations, technology companies, and policymakers.