In this dossier we’ve highlighted the risks and focused on the scary side of facial recognition technologies, but it’s also interesting to shed light on their possibilities. These technologies are always sold to us with the promise that they will make our lives better, yet often the disadvantages outweigh the benefits. Facebook wants to scan our faces under the guise of preventing photos of us from circulating on the platform without our knowledge. You might wonder whether that is worth handing your face over to a tech giant like Facebook. And for facial recognition to help find missing persons, all of us have to be followed by CCTV cameras on the street. Can the technology bring something good to humanity without taking our freedoms away?
Multidisciplinary researcher and artist Emily West may have an answer. She is currently trialling a facial recognition tool that recognises pain in patients who are unable to verbalise it themselves; in her case, those patients are people with dementia. The tool Emily and her team use, Painchek, is already in use in several Australian elderly care institutions. It uses the smartphone camera to record a short video and analyses facial muscle movements that indicate pain. Research shows that the tool scores high in accuracy, sensitivity, and specificity. In Emily’s research project, Painchek is used to look at the pain and discomfort of people with dementia who have been hospitalised. The team is looking at the “noise” that the hospital setting brings, and whether it affects how Painchek works. To better understand the potential use of facial recognition technology in healthcare, I asked Emily some further questions.
Lilian Stolk: Why is facial recognition technology so suitable for researching a possible link between pain and delirium?
Emily West: There has always been a drive to be able to assess pain in a somewhat objective way, but many of the scales we currently use rely on self-report. When your patient can’t engage with this, things become very difficult, and pain has such a knock-on effect on the patient’s comfort, mood, even the extent of their delirium. So we want to be able to assess and address that pain, but to do so accurately you need an idea of what and where that pain is, and of its intensity.
This isn’t a situation where you can assess the patient’s pain by asking “is your arm still painful today?”. Delirious patients often don’t know where they are, what time it is or what’s happening to them. This can then manifest in agitation and restlessness, emotional volatility, being very sleepy and checked out, paranoia and hallucinations etc. But underneath all of this, they still might have pain that needs to be assessed and treated.
Painchek uses something called the Facial Action Coding System (FACS), which looks at physical facial movements that are associated with different emotions or experiences. A human coder performing facial action coding needs hundreds of hours of training, and extended time with a person, to read these movements. The type of facial recognition that Painchek uses automates this process, allowing the face to be “read” for these signs of pain even when the patient in front of us can’t tell us about them themselves.
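To make that automation concrete, here is a minimal Python sketch of how a detector’s facial-action-coding output might be turned into pain indicators. The detector, threshold, and exact action-unit list are assumptions for illustration rather than Painchek’s actual model; the pain-related action units themselves follow the FACS pain literature (e.g. Prkachin and Solomon’s “pain face”).

```python
# Minimal sketch: mapping automated facial-action-coding output to pain
# indicators. The detector, threshold, and AU list are illustrative
# assumptions, not Painchek's actual model or API.

# Action units associated with pain in the FACS literature:
PAIN_RELATED_AUS = {
    "AU4": "brow lowerer",
    "AU6": "cheek raiser",
    "AU7": "lid tightener",
    "AU9": "nose wrinkler",
    "AU10": "upper lip raiser",
    "AU43": "eye closure",
}

ACTIVATION_THRESHOLD = 0.5  # illustrative cut-off for counting an AU as present


def pain_indicators(au_activations: dict[str, float]) -> list[str]:
    """Return the pain-related action units judged present in one frame.

    `au_activations` maps AU codes to activation strengths in [0, 1], as a
    hypothetical video-analysis model might output for each frame of the
    short recording.
    """
    return [
        code
        for code, strength in au_activations.items()
        if code in PAIN_RELATED_AUS and strength >= ACTIVATION_THRESHOLD
    ]


# One frame from a recording; AU12 (smile) is present but not pain-related.
frame = {"AU4": 0.8, "AU6": 0.2, "AU7": 0.6, "AU12": 0.9}
print(pain_indicators(frame))  # ['AU4', 'AU7'] -> two facial signs of pain
```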
LS: The Painchek app is installed on smartphones, but I can imagine that most patients dealing with dementia don’t have a smartphone. Can you explain how this app would be integrated into healthcare?
EW: The tool isn’t designed for self-assessment, so this really isn’t an issue – though I think you’re also underestimating the number of older adults who have smartphones! The tool can be used whenever the carer feels that pain needs to be assessed, and each assessment is logged with time and setting within the app. So it’s as easy as that really: you feel like the patient you’re looking after could do with a pain assessment, so you take out your smartphone, do the three-second recording, and add additional data as you need to.
LS: Psychologist Paul Ekman concluded that there are six universal facial expressions: happiness, sadness, anger, fear, surprise, and disgust. But his research is also highly debated. Is pain another universal facial expression? Does everyone look the same when in pain?
EW: It’s not that facial expressions for pain are universal, it’s that there are commonalities between them. So the tool and the logic behind it aren’t saying “this is what everyone looks like when they are in pain”; it’s more “there are movements and signs often associated with pain, so we’re looking for them”. Looking for those signs (say, a furrowed brow or a downturned mouth), and at other data that we know are associated with pain, like behavioural changes, can help us build a comprehensive picture of whether a person might be in pain.
LS: We smile when we’re happy, but we don’t smile all day long. It works similarly for pain: when you’re in pain, that’s not consistently visible in your facial expression. How do you deal with that in Painchek?
EW: It’s a pretty similar logic—if you were happy all day, then chances are you’d smile at different points throughout the day, so checking in with you a few times would capture that. It’s the same with Painchek: by checking in with and assessing patients throughout the day (as already happens in healthcare), we pick up pain. You can then also look more specifically at whether the pain is associated with certain things—is it after moving, after eating, at night—by checking in during or after those times.
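That kind of pattern-spotting is easy to picture in code. A minimal sketch, assuming hypothetical assessment records with a context tag (the field names and scores below are invented, not Painchek’s actual data model):

```python
# Sketch: grouping repeated pain assessments by context to spot patterns.
# The records, fields, and scores are hypothetical.
from collections import defaultdict
from statistics import mean

assessments = [
    {"time": "08:10", "context": "after moving", "score": 9},
    {"time": "12:30", "context": "after eating", "score": 2},
    {"time": "18:05", "context": "after moving", "score": 8},
    {"time": "22:40", "context": "night",        "score": 3},
]

by_context = defaultdict(list)
for record in assessments:
    by_context[record["context"]].append(record["score"])

for context, scores in by_context.items():
    print(f"{context}: mean score {mean(scores):.1f}")
# "after moving" stands out, suggesting movement-related pain
```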
LS: Painchek doesn’t only measure whether there’s pain; it also measures the intensity of the pain. Can you explain how that works?
EW: Absolutely. Most pain scales measure intensity to an extent, and Painchek is no different. It looks at the different dimensions of pain assessment—the facial movements, behavioural changes, noises etc.—and assigns a severity value to these. That way, someone looking at the data who hadn’t seen the patient would know whether they were just a bit withdrawn and frowning, or screaming, lashing out and inconsolable.
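As a rough sketch of how that multidimensional scoring can work: binary observations per domain are summed into a total, which maps to a severity band. The domain names echo published descriptions of Painchek, but the items, point values, and cut-offs below are invented for illustration.

```python
# Illustrative domain-based severity scoring; items and cut-offs are
# invented, not Painchek's actual scale.

OBSERVED = {
    "face":      ["brow lowering", "lid tightening"],  # from the video analysis
    "voice":     ["moaning"],
    "movement":  [],
    "behaviour": ["withdrawn"],
    "activity":  [],
    "body":      [],
}

SEVERITY_BANDS = [  # (minimum total score, label), highest first
    (12, "severe"),
    (8, "moderate"),
    (4, "mild"),
    (0, "no pain"),
]


def total_score(observed: dict[str, list[str]]) -> int:
    """Each observed indicator contributes one point to the total."""
    return sum(len(items) for items in observed.values())


def severity(score: int) -> str:
    """Map a total score to the first band whose minimum it reaches."""
    for minimum, label in SEVERITY_BANDS:
        if score >= minimum:
            return label
    return "no pain"


score = total_score(OBSERVED)
print(score, severity(score))  # 4 mild: withdrawn and frowning, not inconsolable
```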
LS: You use facial recognition technology to check for pain in patients who cannot verbalise their pain themselves. How can you then verify the outcome?
EW: We look at how outcomes picked up by Painchek compare against outcomes picked up by other pain scales that have already been used and validated in this population. So if we know that an analogue pain scale gives us an accurate idea of when a person with dementia is in a lot of pain, we’d be looking for Painchek to give us the same kind of output, and not something wildly different.
I think it’s also important to note that having dementia doesn’t mean that you can’t communicate your own pain at all. It’s a disease that exists on a spectrum, and of course people have good and bad days when they might be more or less lucid. So it’s especially important to not discount self-reported pain just because a person has dementia, or delirium or anything else really. It’s our job to take that as a starting point and to then look into it.
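As a sketch of what that comparison looks like in practice, the snippet below correlates paired scores from the two tools across the same assessment moments. The numbers are invented for illustration, and a real validation study would also report agreement statistics (for example Bland-Altman analysis) rather than a correlation alone.

```python
# Sketch of a concurrent-validity check: do the two tools' scores move
# together across the same assessments? Scores here are invented.
from statistics import correlation  # Python 3.10+

tool_scores      = [2, 5, 9, 4, 11, 3, 7]   # Painchek-style totals
reference_scores = [1, 6, 10, 4, 12, 2, 8]  # a validated observational scale

r = correlation(tool_scores, reference_scores)
print(f"Pearson r = {r:.2f}")  # close to 1.0 means the scales broadly agree
```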
LS: Can you tell us a bit more about the results of the research so far?
EW: We’re not at that point yet—when COVID hit we were about halfway through running the study in hospital wards. Obviously we’ve taken a few months off that now, but we’ve been working on other parts of the project virtually, so we’ll be able to hit the ground running as soon as this lockdown is lifted.
LS: Did you run into any forms of bias from the facial recognition technology?
EW: Not bias per se, but it was really interesting to find that the app sometimes had trouble picking up patients’ faces in the low light levels that we had—I guess the gloominess of a winter day in the UK is something that’s hard to plan for when you’re designing an app in Australia! We also occasionally had trouble picking up very pale people when they were lying on white pillows. That’s something quite specific to the hospital setting; in care homes people often have their own bedding, or they’re up and about and sitting in chairs more. So there were a few of those unintended consequences, but we work with incredibly diverse populations and I haven’t come across any wider biases.
LS: Do you think that facial recognition technology will be applied more often in healthcare? If so, what options do you foresee?
EW: I think it certainly has a future. There are a lot of things that healthcare providers have to interpret in the clinical setting, and if we can find tools to make that a bit more objective, that would be invaluable. But of course this comes with great responsibility: we can’t come to use tools as a crutch without applying human intelligence and nuance to them. But that’s the struggle for all new technological developments really—working to find a balance.
I’d personally love to see more ambient monitoring. When we have the capacity for small devices to handle masses of information (vital stats, hydration etc.) and to have that running in the background, flagging problems when they arise, I think that will be a real leap forwards. These could also provide background or baseline information in consults, and really help out medical professionals if they’re used well.
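A minimal sketch of that flagging idea, assuming a device that compares incoming readings against a per-patient baseline (the fields, baselines, and tolerances are all hypothetical):

```python
# Sketch: flag vital signs that drift beyond a tolerance from the
# patient's baseline. All fields and numbers are hypothetical.
BASELINE = {"heart_rate": 72, "hydration_pct": 60}
TOLERANCE = {"heart_rate": 15, "hydration_pct": 10}  # allowed drift


def flags(reading: dict[str, float]) -> list[str]:
    """Return the vital signs in a reading that drift beyond tolerance."""
    return [
        key
        for key, value in reading.items()
        if key in BASELINE and abs(value - BASELINE[key]) > TOLERANCE[key]
    ]


print(flags({"heart_rate": 95, "hydration_pct": 58}))  # ['heart_rate']
```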
LS: Should there be any limits to this, in your opinion?
EW: It’s so hard to look at the ethics around technology in any meaningful way until people start living with it. I’m always interested in the law of unintended consequences: how the best-made plans truly exist in the world, the ways that humans “misuse” and adapt technologies. I want to see technologies and roll-out models that are reflexive to these changes and adaptations, rather than trying to set hard limits for hypotheticals in advance. I think it’s a question for the public at large to answer, and for scientists and designers to adapt to.