Digital rights are human rights.

Interview with Nani Jansen Reventlow

by Olena Boytsun, Founder of the
Tech & Society Communication Group

Nani Jansen Reventlow (Photo: Tetsuro Miyazaki)
Olena Boytsun, founder of the Tech & Society Communication Group, talks to Nani Jansen Reventlow, an award-winning human rights lawyer specialised in strategic litigation at the intersection of human rights, social justice and technology, about the concept of digital rights, regulations within the field and vigilant individuals.

Nani, thank you for taking the time to talk today about such an important subject as digital rights. You are one of the professionals who have been building this field over the years, having founded and developed the Digital Freedom Fund. What definition of digital rights do you consider the most accurate nowadays?

— The traditional, old-fashioned way of looking at digital rights is to focus on privacy, data protection and freedom of expression online. The way that I, as well as we at the Digital Freedom Fund, have looked at them is slightly broader. We've always looked at it as all human rights engaged in the digital context - so, not just civil and political, but also economic, social, and cultural rights. In a weird way, the pandemic has been helpful in explaining the definition a little bit more. I think that we all noticed how much we had to engage with technology to keep so many areas of our lives running. And that illustrates how in almost every aspect of our lives we're engaging with technology in some form: at school, at work, in our social interactions with each other, our banking, healthcare systems… Basically everything. So in a broader sense of the word, it's all human rights, and when technology is involved, we're basically talking about digital rights.

It is quite common to think that there is a whole spectrum of human rights, and within this field there is a sector of digital rights. Do you mean that it makes more sense to see digital rights as basic human rights that are simply “touched” by or “delivered” through technology?

— It makes sense to look at it in a more holistic way. It's the same way that it's sometimes helpful to emphasize the context in which rights are being engaged. There's the same set of human rights for everyone. But it can be practical to focus on the human rights of children or women; then we talk about children's and women's rights. It's about highlighting particular, specific needs. I don't only mean protection needs, but also what people need to be able to enjoy and properly fulfill their human rights. Technology presents very specific challenges and potential harms when it comes to our human rights. So it's good to be aware of the fact that, when you are deploying technology, you have to look at the full scale of human rights and how they might be affected by it.

Does that mean technology is only bad for rights?

— As someone famous said: “Technology is neither good nor bad”. (Note: it was Melvin Kranzberg, known for his laws of technology.) Technology itself isn't the issue. It's about how we, humans, engage with it. It's about where and how we choose to use it, and what kind of technology we create and want to use. Imagine any tool that you have in your kitchen, for example. You can use it for good or bad. You could make a delicious meal with it, or you could hurt yourself by accident, or another person, or your furniture, whatever. The same objects or pieces of technology could lead to really good things and to really bad things. And it's about thinking through what we understand about the context in which we're deploying something, and what we expect from how it works.

We have safety measures for all the different things that we use in daily life. If we need transportation to get somewhere, there are risks entailed, and we try to mitigate them. In a similar way, we have to think about technology. How do we make something that is safe, that doesn't create harm for people and perhaps creates benefits for our society, and how do we mitigate the risks when it isn't? So, it's not the tech in and of itself, it's the whole context, it's us and the choices that we make.

Are there any specific groups that are the most vulnerable to digital rights violations?

— In the end, anyone could be affected. But I do think that those who are marginalized by our societies run the most immediate risk of harm, in particular because of the following two points.

The first is the fact that the kind of technology we build and the way that we deploy it reflect the power structures that we have in our societies. Those who have power are often the ones who are also the most privileged. They are the ones who are making the policies and setting the rules. They are focused on their own interests. And there's often not enough attention paid to people who are differently placed in our societies, to groups marginalized for all sorts of different reasons: ability, gender, religion, economic status, etc. And we're not focusing enough on making sure that any harms inflicted on those groups are mitigated sufficiently, because it's not part of the field of vision for leadership.

The other thing is that quite often those communities and groups are the testing ground for a lot of the technology that we want to roll out. I'm not the one who framed that. Philip Alston, international law scholar and human rights practitioner, looked a lot at the impact of technology on the human rights of the poor and said that people who are economically disadvantaged are often the testing ground for new technology. You see it in other contexts as well, for example, with Europe using all sorts of really invasive technology to identify people, etc. That is all coming for all of us. It's there, it's being tested, it's being refined, and it's going to make it to all of our doorsteps really soon. So, this is why we have to keep in mind that what affects the most marginalized groups in our communities will directly affect all of us. It's just at an early stage. It's just not here yet, but it's coming.

Could you give an example of a digital rights violation that would be relatable and understandable to anyone?

— My point is that it all affects all of us. We can take different examples. For instance, facial recognition technology works best on white male faces, meaning that it doesn't work well on the rest of our society. Based on that technology, assumptions are being made as to whether or not someone has committed a crime or whether or not someone was in a certain area at a certain time. It can also relate to innocent things like using facial recognition software to open a door or to access certain software. We saw it a lot during the pandemic, when such software was used to check whether or not students might be cheating during the exams they had to take from home. So, yes, that affects all of us, except perhaps for a small group of people: white, able-bodied, cisgender developers. The “standard user”, which is not standard at all.

I can give one more example: predictive software that has been incorporated into law enforcement, such as predictive policing, but that you also see more and more in the private sphere. Whether or not you can get a bank loan is becoming increasingly dependent on all sorts of factors that you do not know in advance. There was an interesting case recently that nicely illustrates how this sort of thing harms all of us. A Swedish-speaking man in another Scandinavian country, who was living in a rural area, wanted to get a loan to do something with his house. But his application was rejected, because Swedish speakers were a minority in that particular country and women were rated higher for receiving money. So he was a man from a minority, and therefore he was downgraded. Had he been, say, a Finnish-speaking woman asking for that loan, he would have gotten it. So there are all of these ways in which the systems at play are not transparent to us, yet they are making impactful decisions about our lives. So, yes, it affects us all, both in the public and the private sphere.

Are there any global regulations in the digital rights sphere?

— Global regulation – no, it doesn't exist. And I would also like to emphasize that we already have the human rights framework to regulate these things. When you look at it through a human rights lens, we have beautiful international frameworks that just need to be applied in technology-related circumstances as well.
There are, of course, regional regulations; the EU, for example, is strong in terms of data protection rights. The GDPR (General Data Protection Regulation) gives individuals quite a lot of tools to safeguard their own data rights, and you can act as an individual, as a collective, as a group, and so on. And there are regulations in the making, such as on AI (Artificial Intelligence), as well as regulations that should give individuals more power to safeguard their rights when it comes to big platforms, such as the Digital Markets Act and the Digital Services Act.

I think, though, that a lot of this is tricky because it still puts a lot of onus on the individual to pick up the case and do the work. Sometimes cases can be handed over to civil society, and an NGO can act on behalf of individuals. But it still puts a lot of strain on individuals to first understand what is going on, and then also figure out how they can act on it. However, it's good that these frameworks are there, because they will help sharpen the regulation and also help increase the protection of our human rights in this context.

There is no global “digital rights convention” or anything like that, and I am sometimes asked whether one should exist. My answer is no, I don't think so. I think that we have a human rights framework that was solidified right after World War II. So it's quite old, it's been around for a long time, but it's remarkable how adaptable it has already proven to be to different contexts and developments over the decades.

Nani Jansen Reventlow (Photo: Tetsuro Miyazaki)

And I think it can also keep pace with what is happening now. I am much more a fan of testing the frameworks that we have to see if there are any gaps related to digital rights, and then looking at how we want to address those. Judges, courts, regulators - they are used to adapting older frameworks to newer situations and newer scenarios. So the questions are more about how to make sure that people understand what the problems are, where they should be raised, and who can be held accountable for what types of harm.

It looks like you are a proponent of a more evolutionary process for developing such frameworks. If you were in charge of digital rights on the whole planet and could easily enforce the ideal legislation, it seems that you wouldn't change a lot in the system.

— I think that the regulatory framework isn't the problem; giving it teeth in practice is. That is where it has to go: making sure that people have meaningful access to justice, so that they can go to court and safeguard their rights. The other thing is making sure that the courts know what to do with those cases: that they understand the issues, understand the tech, understand where our responsibility lies for these human-made things, and understand how to rule on that. And then, of course, regulators and legislators also need to understand it, so that they can respond to the pointers they are going to be getting from law enforcement. I think it is more about making sure that we make the most of what we have. I am sure there might be gaps and things that we need to fix, but those are going to prove relatively minor overall.

If people feel that their digital rights have been violated in some situation, what steps should they take?

— It depends on the context. I think you should ask yourself a similar question as you would in an analog context. For example, if it is in the context of work and employment, then it is probably a labor law issue. And the next steps depend on whether or not you have a union that can assist you with that, or whether this is something for which you have to get legal counsel as an individual. A good rule of thumb is asking yourself the same questions as you would if technology were not involved, because all technology-related situations are still part of the same spectrum of rights.

For a digital rights case, should one look for a generalist, who will pick up the details along the way, or for a professional who specializes in technology-related cases?

— We all have to learn. It can be the first time for a labor lawyer to represent clients who were fired because an algorithm said that they weren't doing their job well. There are just certain things for which you're not necessarily going to find people who've done it before. I think a lot of it is also about making sure that you find someone who's not afraid to ask questions and to find people elsewhere who might have dealt with a similar situation before, or to ask for support from NGOs or other experts who might be able to assist in better understanding what is going on. A couple of years ago I did a project, “Catalysts for Collaboration”, at the Berkman Klein Center at Harvard University, which looked at collaboration around digital rights cases between lawyers, academics, technical experts, and activists. What I found was interesting. In the beginning, I did a lot of interviews with people to find out whether lawyers were working with technologists a lot or not. And quite often it hadn't occurred to lawyers as an obvious thing. They were like: yeah, okay, I could have done that, actually. But it's also about each of us acknowledging that we have expertise in certain areas, for example in knowing how to frame the legal argument, but that we might need some support in understanding the technology and in getting the court, or whoever we're engaging with, to understand it and how these things fit together. I'm a big fan of cross-disciplinary collaboration, because there is mutual learning there. And in the end, it can lift all the work to a much higher level.

If you personally need advice on technology-related aspects of your work, what do you do? Do you talk to someone, do you do your own research, or have you done any specific training?

— I haven't done specific training, but we did start some trainings at the Digital Freedom Fund for lawyers to better understand the technology, for example on content moderation, etc. I usually start by reading a bit and seeing how far I get. And if I need anything more specific, I try to find out if there's anyone in my network who might be able to answer the questions that I have. One of the nice things these days is that you can find so much online already. Usually, whatever you are looking for, someone will have written a blog post about it or given a TED talk. You can usually find quite a lot.

If a legal professional would like to specialise in the field of digital rights, should this person have any specific preparation? Or is there any licensing involved?

— If you're talking about practicing attorneys, for example, it might be worthwhile to see if there are any professional education trainings. I know the context of the Netherlands, where I am practicing: you have an obligation to continue your professional development. You have to do several courses each year, and there are very targeted ones that are focused on specific areas of law. So it's worthwhile exploring whether there's anything like that on offer. Then there is also a certain amount of self-learning that you can obviously do by following what some of the digital rights organizations are writing about. There are a lot of great explainers out there that show how mass surveillance works, how online content moderation works, and things like that. There are also trainings organized from time to time, for example by the Rule of Law Initiative of the American Bar Association, and there are different NGOs that focus on exactly these conversations amongst practitioners. Keeping an eye out for that is a good idea as well.

The digital rights ecosystem is only being built now. Businesses digitalise much faster than governments. Is there a disproportion in these processes, when new businesses develop without proper regulation?

— This is a good question. I still stand by my answer about the need to make sure that current frameworks are tested against new scenarios. There's just no way to regulate and legislate in real time. That is the nature of the way that we set up these processes, with sufficient checks and balances. We need to make sure that we don't have too much of a democratic deficit and that everything goes right with the laws that we adopt. But it's important to remain vigilant, act quickly and test new developments against these existing frameworks as soon as possible. Sometimes we have to be unafraid to say: wait, we don't quite know what the impact of this is, so we're just going to pause it for a moment and investigate it now.

It is important that in all of these processes we come to a routine, to a process that mandates regular human rights assessments of what is going on, so that this becomes standard practice in all technology-related cases.
I quite often think of how procurement works. It is probably the least attractive topic in law school that you can think of. But it's important, because it makes it transparent why someone is allowed to build a certain thing for the common good that we are all going to be using as a society. This should apply to the technology that we use, buy and roll out just as much as to any other product. Right now, when governments ask for details on how some software works, companies too often say: oh, these are trade secrets, we can't share with you how it works. You don't have to make it all public, but someone must be able to scrutinize exactly what is happening inside your system. At some point such a demanding position will change the behavior of the whole industry. There is a lot that can be done to shift that balance in a direction that will in the end be a lot better for all of us.

Nani Jansen Reventlow (Photo: OII)

Who should be the driver for such changes? Citizens, civil society, government?

— There are different roles for everyone. It is civil society's job to be the watchdog, to push for this behavior and to push the government to take such stands, etc. It's also quite often civil society's job to educate citizens so that they understand what is at stake. As individuals, we can be a lot more vigilant. We should also take responsibility for exercising power where we have it. For example, if you are volunteering somewhere at a sports club and someone suggests placing cameras all over the place, you have to force the conversation as to who's going to get access to the data, what the cameras are actually going to do, and where that data is going to be stored. And not be afraid to just make a fuss from time to time, because that is necessary for all of us. These conversations quite often start in a small setting. It's not just about what happens in the capitals, what lawmakers are doing, what policymakers are doing. It's also about the individual decisions that we make in smaller contexts.

You have built a powerful player in the digital rights field, the Digital Freedom Fund. And now you have founded a new initiative that deals with all kinds of human rights violations, not only those related to technology. What was your motivation? Is it a completely new beginning, or is it still connected to the work that the Digital Freedom Fund is doing?

— The initiative Systemic Justice will look at racial, social, and economic justice through an intersectional lens. It focuses in particular on how a person's identity compounds harm when they experience human rights violations. And I see it as a continuation of the view that we have always practiced at DFF about digital rights and human rights: looking at all human rights, and also, in particular, at violations of digital rights as a reflection of bigger power structures in our societies.
Look at the harm that is inflicted on specific communities because of the use of predictive policing tools. You cannot focus only on the use of this software without accounting for where it comes from, which is discrimination such as institutional racism in law enforcement. One is a symptom of the other. And, I guess, the reason why we're taking a slightly broader lens is to make sure that we don't end up only fighting symptoms, only focusing on the technological tip of the iceberg rather than dealing with the iceberg that's underneath it. That iceberg is sustaining the use of that technology, inspiring its development, driving its implementation, and also holding in place the negative impact that it causes for certain groups. We are looking a bit further now, at how we can shift what causes us to use things that we know are going to be oppressive to certain people in our societies, and how we can change those bigger issues.

At our discussions within the Tech & Society group, we come to the conclusion again and again that it is still people who develop technology, and that it matters what kind of ethical principles and values they have. My last question is: what would make you happy, what would you like to achieve with the Systemic Justice initiative?

— What would make me happy is that we really start shifting some of these power imbalances that we're currently seeing. To put it simply, we are trying to make this fiction - that the law is equal to all, that justice is accessible to all - a reality. Because right now it is not. We want to level access to justice, so everyone can make use of these systems and fight for their rights and their causes.

Prepared in February 2022, published in February 2023
References
1. Personal website of Nani Jansen Reventlow – https://www.nanijansen.org/
2. Systemic Justice – https://systemicjustice.ngo/
3. Digital Freedom Fund – https://digitalfreedomfund.org/
4. Project “Catalysts for Collaboration” – https://catalystsforcollaboration.org/
5. Statement on Visit to the USA, by Professor Philip Alston, United Nations Special Rapporteur on extreme poverty and human rights – https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=22533
6. General Data Protection Regulation (GDPR) – https://gdpr-info.eu/
7. Europe fit for the Digital Age: Commission proposes new rules for digital platforms – https://ec.europa.eu/commission/presscorner/detail/en/IP_20_2347
8. The Digital Services Act: ensuring a safe and accountable online environment – https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/digital-services-act-ensuring-safe-and-accountable-online-environment_en
9. The Digital Markets Act: ensuring fair and open digital markets – https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/digital-markets-act-ensuring-fair-and-open-digital-markets_en
10. Kranzberg, Melvin (July 1986). “Technology and History: ‘Kranzberg's Laws’” – https://www.jstor.org/stable/3105385
11. American Bar Association Rule of Law Initiative – https://www.americanbar.org/en/
© 2021-2023 Tech & Society Communication Group. All rights reserved.