Face ID on the iPhone: impressive, but what else can the tech be used for? Photograph: Prostock-Studio/Alamy

Facial recognition is big tech’s latest toxic ‘gateway’ app

John Naughton

We test and control drugs, so why do we freely allow the spread of potentially harmful products by unregulated entrepreneurs?

The headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. “Facial recognition is the plutonium of AI”, it said. Since plutonium – a by-product of uranium-based nuclear power generation – is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read.

The article, by a Microsoft researcher, Luke Stark, argues that facial-recognition technology – one of the current obsessions of the tech industry – is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly. You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera. There, it’s regarded as universally beneficial. If you’ve ever seen Facebook suggest a name for a face in one of your photos, for example, then you’ve encountered the technology. And it’s come on in leaps and bounds as cameras, sensors and machine-learning software have improved and as the supply of training data (images from social media) has multiplied. We’ve now reached the point where it’s possible to capture images of people’s faces and identify them in real time. Which is the thing that really worries Stark.

Why? Basically because facial-recognition technologies “have insurmountable flaws in the ways they schematise human faces” – particularly in that they reinforce discredited categorisations around race and gender. In the light of these flaws, Stark argues, the risks of the technologies vastly outweigh the benefits in a way that is reminiscent of hazardous nuclear materials. “Facial recognition,” he says, “simply by being designed and built, is intrinsically socially toxic, regardless of the intentions of its makers; it needs controls so strict that it should be banned for almost all practical purposes.”

There are two levels of concern here, one immediate and the other longer-term but perhaps more fundamental. The short-term issue is that the technology is currently only good at recognising some kinds of faces – mostly those with white complexions – and has difficulty with people of colour. Whether this is “insurmountable” (as Stark maintains) remains to be seen, but it’s alarming enough already because it provides a means of “racialising” societies using the charisma of science. The longer-term worry is that if this technology becomes normalised then in the end it will be everywhere; all human beings will essentially be machine-identifiable wherever they go. At that point corporations and governments will have a powerful tool for sorting and categorising populations. And at the moment we seem to have no way of controlling the development of such tools.

To appreciate the depths of our plight with this stuff, imagine if the pharmaceutical industry were allowed to operate like the tech companies currently do. Day after day in their laboratories, researchers would cook up amazingly powerful, interesting and potentially lucrative new drugs which they could then launch on an unsuspecting public without any obligation to demonstrate their efficacy or safety. Yet this is exactly what has been happening in tech companies for the past two decades – all kinds of “cool”, engagement-boosting and sometimes addictive services have been cooked up and launched with no obligation to assess their costs and benefits to society. In that sense one could think of Facebook Live, say, as the digital analogue of thalidomide – useful for some purposes and toxic for others. Facebook Live turned out to be useful for a mass killer to broadcast his atrocity; thalidomide was marketed over the counter in Europe as a mild sleeping pill but ultimately caused the birth of thousands of deformed children, and untold anguish.

In the end, we will need some kind of control regime for what the tech companies produce – a kind of Federal Apps Administration, perhaps. But we’re nowhere near that at the moment. Instead (to continue the pharma metaphor) we’re in the pre-pharmaceutical era of snake oil and patent medicines launched on a gullible public by unregulated and unscrupulous entrepreneurs. And as far as facial recognition is concerned, we are seeing services that effectively function as gateway drugs to normalise the technology. FaceApp, for example, used to offer a “hot” filter that lightened the skin colour of black users to make them look more “European”, but had to abandon it after widespread protests. It still offers “Black”, “Indian” and “Asian” filters, though. And, interestingly, Apple’s latest range of iPhones offers Face ID – which uses facial-recognition software to let the device identify its owner and enhance its “security”. The subliminal message of all this stuff, of course, is clear. It says that facial recognition is the wave of the future and there’s nothing to worry our silly little heads about. Which is where Stark’s plutonium metaphor breaks down. Nobody ever pretended that plutonium wasn’t dangerous.

What I’m reading

The politics of laughter
A comedian for president? William Davies has written a perceptive essay on the subject of politics and comedy for the openDemocracy website.

Now you see it…
Ever wonder how astronomers found supermassive black holes? A lovely student essay on the subject by Mark J Reid, winner of a competition run by Harvard University, can be found on nautil.us.

What fresh hell is this?
Great reporting in Wired by Nicholas Thompson and Fred Vogelstein last week on a year of chaos in Mark Zuckerberg’s Facebook empire.
