The days of walking the streets anonymously are over. Chances are that our faces are being tracked while we do our grocery shopping, take the train to visit our parents, or cheer on our favourite soccer club. Facial recognition software is increasingly integrated into our daily lives, often without us even knowing.
Facial recognition systems are a form of biometric technology: they turn characteristics of the human body into data. The information derived from analysing your facial features can be used in different ways: a face can be verified, identified or classified. The technology verifies your face during a passport check, when it measures whether the face in front of the camera matches the face in your passport. A face is identified when law enforcement agencies check whether it matches a face in their database during a police stop. And the AI can classify your face into categories like ‘young’, ‘woman’ or ‘attractive’. This last feature helps TikTok avoid serving too many videos of what it classifies as ‘ugly people’.
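The three modes above can be sketched in a few lines of code. This is a minimal, illustrative sketch only: it assumes faces have already been converted into numeric feature vectors (“embeddings”), and the function names, the similarity threshold, and the toy category predictors are all made up for the example — real systems use trained neural networks for every step.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Illustrative cut-off for deciding "same person" -- real thresholds are tuned.
THRESHOLD = 0.9

def verify(live_face, passport_face):
    # Verification: one-to-one. Does this face match that one document?
    return cosine_similarity(live_face, passport_face) >= THRESHOLD

def identify(face, database):
    # Identification: one-to-many. Who in the whole database is this?
    best = max(database, key=lambda name: cosine_similarity(face, database[name]))
    if cosine_similarity(face, database[best]) >= THRESHOLD:
        return best
    return None  # no confident match

def classify(face, category_model):
    # Classification: attach labels like 'young' or 'woman' to a face.
    return [label for label, predicts in category_model.items() if predicts(face)]
```

Note that identification is the riskiest mode: one wrong nearest match in a large database points the finger at a specific, innocent person.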
Imagine you’re in a bar. Instead of a stranger coming up to you and asking for your name or number, which you can always refuse, this person secretly takes a picture of you. With the help of an enormous face search engine, the stranger can figure out your name and immediately see all your photos that are available online—including those old Hyves photos, or (for the teenagers among us) that baby photo your mother posted on her Facebook profile right after you were born. This dystopian, Black Mirror-like scenario is not far from reality. In early 2020, The New York Times reported on a Silicon Valley company called Clearview AI that has built exactly such a face database, with content scraped from social media platforms, and is selling it to law enforcement agencies around the world, including in the Netherlands, as well as to private organisations. Some months and much criticism later, the company stated that it had stopped selling information to private organisations. But the fact that a single commercial company holds this enormous amount of private data remains very scary.
Facial recognition technology is increasingly being used, and not just to unlock our phones or create the latest face filters on Instagram. It’s being used by governments and organisations that make life-changing decisions, such as the police, while the reliability of the software is still lacking. In fact, most facial recognition algorithms have a racial bias: Black and Asian people are more likely to be misidentified. An MIT study of three commercial gender-recognition systems found they had error rates of up to 34% for dark-skinned women—a rate nearly 49 times that for white men. Facebook’s image recognition software can correctly identify 85.4% of 1 billion images, but that means it will still go wrong millions of times.
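That last point is easy to make concrete: at this scale, even a seemingly high accuracy leaves an enormous absolute number of mistakes. A quick back-of-the-envelope calculation with the figures quoted above:

```python
total_images = 1_000_000_000   # 1 billion images, as quoted above
accuracy = 0.854               # 85.4% correctly identified

errors = total_images * (1 - accuracy)
print(f"{errors:,.0f} misidentified images")  # roughly 146 million
```

A 14.6% error rate sounds modest until it is multiplied by a billion.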
In this dossier, we explore what the increase in the use of facial recognition systems means for our society. Lilian Stolk wonders to what extent AR filters will protect our faces against facial recognition technology. Margarita Osipian asks Controle Alt Delete about the role this technology plays in the hands of the Dutch police. We’ve invited designer Noam Youngrak Son to reflect on how AR filters can be used as a way to express yourself and create a multiplicitous identity. Multidisciplinary researcher Emily West highlights the potential of facial recognition technology for patients with dementia. Want to protect your face? Check out our overview of the most creative ways to block the technology.
The featured image was made with the tool ImageNet Roulette; check out this website for more information.