Opposing facial recognition: why focusing on accuracy misses the point

Not a viable solution to facial recognition (source: private).

Preliminary remark: This article was first published on medium.com on February 4, 2020.

Facial recognition has come under massive scrutiny, not least since its live variant came to public attention. Approaches to the technology are sharply divided: while China uses it routinely and extensively to surveil its citizens’ everyday lives, San Francisco, notably the ‘home territory’ of the companies driving the development of this type of technology, banned it last spring.

Somewhat surprisingly to any critical observer of the debate, the London Metropolitan Police recently announced that it would use live facial recognition cameras on London streets, citing “a duty (sic!) to use new technologies to keep people safe”. Apart from raising questions about the supposedly default superiority of technology over other means of keeping people safe, this move has also triggered sharp criticism pointing to the staggering number of misidentifications as a reason why the technology should not be used.

The argument is as follows: it is irresponsible to deploy facial recognition if it gets only 5% of results right, or 18% in the ‘best case’. It is not reliable, and therefore not safe.

I admit that I chuckled when I read that the South Wales Police’s use of facial recognition to pick out criminals at the UEFA Champions League Final in Cardiff in June 2017 generated 92% false positives. It seems pretty harmless at first sight: you are on your way to a football game, but you never make it there because the police, or rather their AI, decides you are a wanted criminal. Instead of attending the game, you spend a couple of hours at the police station, verifying your real identity… Not much harm done (says a non-football fan, obviously).
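To get a feel for how such percentages arise, and why they are so easy to contest, here is a minimal back-of-the-envelope sketch in Python. All numbers in it are my own illustrative assumptions, not the actual Cardiff figures: when the people a system is searching for are rare in a crowd, even a system that rarely misfires on any single face will produce alerts that are overwhelmingly false.

# Illustrative calculation with assumed, hypothetical numbers
crowd_size = 170_000          # assumed size of the crowd being scanned
wanted_in_crowd = 50          # assumed number of genuinely wanted individuals present
true_positive_rate = 0.90     # assumed: 90% of wanted faces are correctly flagged
false_positive_rate = 0.005   # assumed: 0.5% of innocent faces are wrongly flagged

true_alerts = wanted_in_crowd * true_positive_rate                     # ~45 correct alerts
false_alerts = (crowd_size - wanted_in_crowd) * false_positive_rate    # ~850 false alerts

share_false = false_alerts / (true_alerts + false_alerts)
print(f"Share of false alerts among all alerts: {share_false:.0%}")    # roughly 95%

The particular numbers are beside the point; what matters is that as soon as we argue this way, the debate turns into a battle over exactly such numbers.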

But what if facial recognition ‘catches’ you in a very different situation, e.g. on your way to a job interview, or even worse, on your way to visit a family member in the intensive care unit? It is not difficult to think of various situations in which a ‘deviation’ forced on you by a poorly trained AI can cause significant harm and emotional pain. Even more so when it is driven by an underlying discriminatory logic: studies (e.g. the one by Joy Buolamwini) have shown that facial recognition is much more likely to misidentify people with darker skin.

These are important and very worrying findings that emphasize the overall problem of algorithmic discrimination. But what does it imply when we argue against facial recognition on empirical grounds such as overall low accuracy and particular bias with regard to certain ethnic groups? In the worst case, proponents of facial recognition could turn this argument to their advantage, taking it to underline the need to train the technology better so that its accuracy improves, as Google apparently did last year. Better training means collecting more data. And collecting more data often means intruding into people’s privacy. We might open the floodgates for even larger-scale data collection.

Plus, do we really want facial recognition to be perfectly accurate? The higher the accuracy, the lower the chance of escaping it.

Thus, a well-intended rebuttal of facial recognition could be turned into an argument for more facial recognition, for the sake of improving its accuracy.

A safer way to argue against facial recognition rests on moral arguments, not empirical ones. The pitfalls of basing an essentially moral argument on empirical facts are excellently illustrated by Peter Singer in his text “All Animals Are Equal” (free online version here).

Addressing the equality of humans before turning to animals, Singer claims that we cannot base our claims for ‘equality of humans’ on ‘factual observations’, because it is undeniable that “humans come in different shapes and sizes… if the demand for equality were based on the actual equality of all human beings, we would have to stop demanding equality. It would be an unjustifiable demand”.

Therefore, Singer concludes that we must not make our opposition to racism or sexism dependent on the unacceptability of genetic explanations; instead,

we should make our claim to equality “a moral ideal, not a simple assertion of facts”.

Let’s take Singer’s line of thought and apply it to facial recognition. It follows:

if we base our rejection of facial recognition on empirical arguments such as low accuracy, it risks losing its validity as soon as the technology improves. And the technology is very likely to improve.

Opposing facial recognition on the grounds of its low accuracy is therefore at the very least insufficient, if not dangerous. Instead of engaging in battles about the correct number of misidentifications or about the threshold required to legitimize facial recognition, we should focus on what facial recognition really means, regardless of its accuracy, namely “the end of privacy as we know it” and a threat to our civil rights. Our faces are central to our identity. We must treat them as inalienable in the true sense of the word (in-alien-able). But since we cannot (and do not want to) hide our faces to protect them from surveillance, or, in economic terms, since we do not want to cut back our supply, there is only one way to ensure that their inalienable quality is respected, namely by tackling the ‘demand side’. And that can only be done by banning facial recognition.
