Dossier 10: Screen New Deal

The Googlization of Health: Privacy in the age of Big Tech

In October 2020, to combat the spread of Covid-19, the Dutch government launched its contact tracing app, CoronaMelder. One of the main concerns when contact tracing apps were introduced to the wider public was whether they would sufficiently protect our privacy, and how they would influence our day-to-day movement through society. I interviewed Marjolein Lanzing, an assistant professor of Philosophy of Technology at the University of Amsterdam. She did her PhD on self-tracking technologies and the privacy, surveillance, and autonomy concerns they raise, and has been researching the Googlization of health at Radboud University.

Guus Hoeberechts: At the beginning of the pandemic, we were all scared these contact tracing apps would threaten our privacy. We saw what happened in China, where the app was mandatory and temperature-taking robots were roaming the streets. Did our fears come true?

Marjolein Lanzing: There were many people who painted these sorts of dystopian, authoritarian futures, and I don’t think those fears came true; it’s not the case with contact tracing apps in the Netherlands either. Of course it was legitimate to have privacy concerns before these apps were actually developed. It’s also a fair starting point, especially because they were dealing with very sensitive data about people’s health. But fast-forwarding to contact tracing apps now, there are arguments to say that they can be seen as rather privacy friendly. The data is well encrypted, and it is actually very difficult for others to access it. So in a very narrow sense of privacy, I think we can put most of our worries to rest, but not all of them.

This is one of the key debates around contact tracing apps, and also in privacy and surveillance studies, and more broadly in philosophy of technology and science and technology studies. Privacy, of course, is one of the key words and key concerns when we are critical of new technologies, or when we want new technology to be developed in an ethical way. The value of privacy is something that, fortunately, many of us are now acquainted with, but it took a very long time to create this awareness. Thanks to lots of new legislation, such as the GDPR, and big scandals concerning, for instance, the NSA mass surveillance and the bulk data collection under the WIV (Wet op de inlichtingen- en veiligheidsdiensten) in the Netherlands, I think a lot of people are now aware of this concept of privacy as a value, and it’s often used as the tool with which to critique new technology. But there are lots of other values that we should take into account when we develop new technologies.

What lots of people argued at the very beginning of the pandemic, most famously Tamar Sharon (Radboud University), was that maybe privacy is not really the issue here. Maybe the thing that we should be most worried about is the fact that these technologies are facilitated by Big Tech companies, and it might be a question of monopoly or domination, rather than a question of privacy. As an example, she points to these contact tracing apps and explains that in the pandemic, Google and Apple managed to facilitate this technological infrastructure, which basically addresses all of our worries: our data is well encrypted and the storage is decentralised. So there are many reasons to think that the technology is very well developed according to privacy-by-design principles, and so on. So if privacy is taken care of, and I still think that’s a very narrow conception of privacy, then maybe we shouldn’t be worried at all.

But what we should be worried about has to do with the larger trend of all these Big Tech companies moving into our social domains and restructuring them, because they have the expertise and the power and the resources to do so. One big trend you see is that Amazon, Facebook, Apple, Google, everybody is moving into the health domain and developing products and technological infrastructures. In a crisis like the pandemic, we become dependent on Big Tech companies.

WIV 2017 advisory referendum: Dutch citizens could vote on whether their intelligence and security agencies should gain broader surveillance powers. The expansion was rejected, leading to the law being slightly adjusted.
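To make the earlier point about decentralised, privacy-by-design contact tracing concrete, here is a minimal sketch of how an exposure-notification scheme of the Google/Apple kind keeps matching on the phone rather than on a server. The function names and the hash-based derivation are illustrative assumptions, not the actual protocol, which uses cryptographic key derivation and Bluetooth broadcasts.

```python
# Simplified sketch of decentralised exposure matching (GAEN-style).
# Names and the hash-based derivation are illustrative, not the real protocol.
import hashlib
import os

def daily_key() -> bytes:
    """Each phone generates a fresh random key per day; it never leaves
    the device unless the user tests positive and consents to upload."""
    return os.urandom(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    """Derive a short-lived broadcast identifier from the daily key.
    Observers cannot link identifiers back to the key or to each other."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

# Phone A broadcasts rolling IDs over Bluetooth; phone B stores what it hears.
key_a = daily_key()
heard_by_b = {rolling_id(key_a, i) for i in range(96)}  # 96 x 15-minute intervals

# If A tests positive, only A's daily key is published to a central list.
published_keys = [key_a]

# Matching happens on B's device: re-derive IDs from the published keys and
# intersect them with locally stored observations. The server never sees B's data.
exposed = any(
    rolling_id(k, i) in heard_by_b
    for k in published_keys
    for i in range(96)
)
print("exposure detected:", exposed)
```

Because only the daily keys of people who test positive are ever published, whoever runs the central list learns nothing about who met whom. That is the narrow sense in which these apps can be called privacy friendly.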

GH: You’ve said that more and more people are becoming aware of the value of privacy, but there are still many who don’t care and say they have nothing to hide. Why aren’t they concerned, and why should they be?

ML: Because of privilege. Some refuse to care about their privacy because they’re defeatists. They believe they no longer have any power to resist technologies that collect their data; they’ve resigned themselves. I’m concerned mostly with the people who actually think that they have nothing to hide. My first argument would be a social one, which is important in the context of privacy because we tend to think of it as a merely individual right. And, of course, privacy is a human right, but that’s not its only dimension. Privacy is largely a social concept, with many different social dimensions. So when you say ‘I have nothing to hide’, it’s not only about you, but about other people. I would hypothesise that people who say they have nothing to hide are highly privileged people who aren’t used to being interrogated, or arrested on the street, or asked for their identification, or monitored online. There are lots of people who deal with this every day, who are less privileged than those who say they have nothing to hide. Not because they’ve done something wrong, but because they fall outside the scope of what is considered the norm. Privacy is not just about what you think you have to hide; it’s about the relationships we have with other people. As a society, we need to protect a diversity of people and make sure that everybody enjoys the same rights and freedoms. So saying ‘I have nothing to hide’ simply doesn’t make sense.

But also, strictly speaking, who has nothing to hide? Anybody who says ‘I have nothing to hide’ doesn’t have a social life, I’m sure. You don’t share the same things with your parents as you do with your partner. The things you share with your children might be different from those you share with your employer. You might not want the state to know about the things you share with your doctor. There are very good reasons to have boundaries in your life with regard to who has access to what, and the fact that you’ve never been confronted with any harm because of it is not a reason to be sceptical of the value of privacy. People take their freedom for granted, when it’s something we need to protect, and privacy is a very important aspect of that. So the reason why we value privacy is not for the sake of privacy itself. That’s not really the point. It’s that we value something else: autonomy, a free society, a society of equals, where people enjoy the same rights. To make it broader, it’s also important for democracy, because a flourishing democracy is served by a plurality of people who all have their own conceptions of a good life and are able to pursue them (provided they don’t harm other people). Even when you don’t agree, it’s good to have all these ideas present in society, and to let them enter into debate with each other. This is something that privacy facilitates. Again, not only for yourself, but for public spheres in society where people as a group can discuss what is meaningful and important to them. That’s a short introduction to the social value of privacy, but as for the people who have nothing to hide: I don’t think they realise that they’re part of a bigger whole.

GH: Contact tracing and ‘corona passport’ apps are becoming commonplace in our lives. Even if they are privacy friendly, do you think this will shift what we consider ‘normal’ when it comes to sharing our private health data and information?

ML: One of the worries of the government, which was warned by its ethical advisory committee, was that people would get used to contact tracing apps as a surveillance tool. So this issue is on the radar. Technology like this is rolled out on a huge scale. As of now, it’s voluntary to use the app, but the infrastructure is there. And the thing with infrastructures is that they could be abused at some point (as far as I know that’s currently not the case). It’s a point Evelyn Austin made in de Volkskrant last year, writing about surveillance infrastructures: as soon as they’re rolled out, they stay indefinitely, and you have to be careful that they don’t get used for other purposes. She warns that the law doesn’t protect people against this yet. Lots of new technologies get introduced during a crisis, and people accept them because they want to contribute to the public good, such as public health now. But you have to make sure that there’s also an end date, that the technology serves a particular purpose and then gets dismantled again when the crisis is over. And that feels weird. You’ve built an infrastructure, an investment, and then you have to get rid of it again. But you want to avoid the infrastructure getting used for other purposes. During a crisis, laws are sometimes amended or changed in order to make something possible. New policies are made to embed a technology in, to make sure that it’s used for the right purposes within a given time. In order to protect people, you also have to rely on laws that protect against infringement or alternative uses of infrastructures like this. Because regimes can change, people can change. Who you might trust today, you might not trust tomorrow, or they might be replaced.

GH: Many governments used the Google/Apple API to develop their corona app. Our Minister of Health, Welfare and Sport decided last year that the Dutch app wouldn’t use it, but has since reversed that decision. Can you tell me the difference between the apps that used the Google/Apple API and those that countries developed themselves?

ML: Most European countries did. The only country I know of that resisted was France. They had this debate about dependency on Big Tech. They were worried that their digital sovereignty would be undermined if they used an API facilitated by Google and Apple, so they wanted to host it themselves. That unfortunately also means that they are struggling with privacy issues, because they chose a centralised storage system. Ironically, that means their data is less protected than with the contact tracing apps that used the Google/Apple API. England had a similar debate, and eventually they chose to use the API because they thought they would breach fewer privacy laws with it than if they made their own. A while ago, together with Lotje Siffels and Elisa Lievevrouw, I wrote an article about the differences between the contact tracing apps in Europe under different policies. It’s interesting to see the debates that arose around these choices, and how most European countries decided, in the end, that accepting the Google/Apple API would be more in line with the GDPR than developing their own. So that’s the price of privacy. It depends on what you’re more concerned about, privacy or non-domination. There’s something to be said for both approaches.
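The architectural difference being debated here can be summed up by what the central server gets to see in each model. The sketch below is a hypothetical comparison with invented field names, assuming the decentralised model of the Google/Apple API versus a centralised design like the one France chose.

```python
# Illustrative comparison of the two storage models; all field names are invented.

# Decentralised (Google/Apple API): the server only ever holds the daily keys
# of users who tested positive. It cannot see encounters or build a contact graph.
decentralised_server = {
    "positive_daily_keys": [b"...", b"..."],
}

# Centralised (e.g., France's home-grown approach): phones upload the pseudonymous
# identifiers they encountered, so the server itself computes exposure risk and
# could, in principle, reconstruct who met whom.
centralised_server = {
    "encounter_uploads": [
        {"uploader": "pseudonym_A", "seen": ["pseudonym_B", "pseudonym_C"]},
    ],
}
```

The trade-off described in the interview falls out of this directly: the centralised design concentrates more data, and therefore more power, in one place, while the decentralised one protects the contact graph but makes every national app dependent on the infrastructure Google and Apple provide.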

GH: How do you feel about this data being in the hands of private companies rather than public institutions?

ML: The companies control the infrastructure, but to what extent they actually have access to our data is questionable. Hypothetically, if they had access, I would be really, really worried. Not only with Big Tech, but also with the government. This amount of data can create a huge power asymmetry between government and citizens, or between Big Tech and citizens. In the past we’ve seen what Big Tech can do with personal data, how it can be used for alternative purposes, how it can be commodified. That could definitely be a problem. On the other hand, it would be worrying if governments had access to this, because health data is so sensitive, and it tells you so much about what people do, who they are, where they go. It also enables you to cut off options. What makes the problem this big is who is in power and how much power they can have. If the government is able to allow you to go places or deny you services based on your data, that’s a problem. With Big Tech the problem is similar. They’ve become so big that they can decide a lot of things even governments cannot decide for you, like the types of services you get offered or the type of access you have to online systems. Maybe they mean well, maybe they are benign, and maybe they will only do good things with your data, but we just don’t know. This type of surveillance could yield such revealing data that it would make you vulnerable to many problematic forms of interference, manipulation, and discrimination. But I’m not sure that Big Tech has access to this data as of now. If they did, that would be hugely problematic.

GH: Big Tech companies have been slowly creeping into the health and political spheres for decades. Has the pandemic given them a surge they would otherwise not have gotten? And what does this mean for the future of these apps and these companies within our society?

ML: Yes, definitely, they grew exponentially. Everybody and everything had to be online. This is what the Screen New Deal looks like. These companies must have made so much money out of the pandemic. The amount of data that must have gone through them must have been huge. We’re talking about contact tracing apps, but all the technologies that we used to stay in touch, to have contact, and to track the virus came from them. Naomi Klein and Evgeny Morozov have written about this extensively. Klein argues that this is the crisis situation that has opened the doors wide for Big Tech. The idea was that we needed them to organise our lives, which basically was the case. We could have done without, but it would have looked completely different and much more chaotic. Lots of things were still possible because these tech companies were there. And they seem super sympathetic, because they offer us free stuff, or they offer to help out, or they offer to fix our cities, our computers, our lives, everything.

When I saw the introduction of the Metaverse by Mark Zuckerberg, I had to laugh. At some point, he says: “You know, I really hated that during corona, I couldn’t see my colleagues. But luckily, there’s also some advantage to seeing each other online,” and then he presents this Metaverse, which is supposed to supplant every form of social life and social interaction we have. I thought it was tragically hilarious.

People have seen what they can do with technology. And it was very convenient and useful, but this has put a huge foot in the door for Big Tech. It’s not even a foot anymore, it’s a whole Big Tech person, through the door. Considering that lots of European legislative proposals are aimed at curtailing the power of Big Tech companies, these two things are at odds with each other. You need them, and at the same time, you don’t want them to become too powerful. We have to make sure that we still have democratic procedures about whom we choose to be dependent on. What do we want our social domains to look like? Who do we want to structure them? In many domains, we’d prefer to structure them ourselves, or at least have a say in it. If everything goes through these Big Tech companies, these opportunities diminish by the second. For democratic decision making, it’s really important that we don’t just accept these things as they come, but see what they mean for us as citizens, what we want our communities to look like, and who we want to shape these domains. It has such a huge effect that it would be wise if we could take that back a little bit. Or, actually, a lot. And we would do that as a community, not as consumers. Or, as Shoshana Zuboff says, as the abandoned carcasses.

Meta introduction: The future is beyond anything we can imagine