Facial recognition: ten reasons you should be worried about the technology
Facial recognition technology is spreading fast. Already widespread in China, software that identifies people by comparing images of their faces against a database of records is now being adopted across much of the rest of the world. It’s common among police forces but has also been used at airports, railway stations and shopping centres.
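In rough terms, the matching described above works by reducing each face image to a numeric "embedding" vector and looking for the closest stored record. The sketch below is a minimal illustration of that idea, not any vendor's actual system; the vectors, names and the `identify` function are invented for the example.

```python
import math

# A stored "database of records": each known identity mapped to a
# made-up embedding vector (real systems use vectors with hundreds
# of dimensions produced by a neural network).
database = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def identify(probe, threshold=0.9):
    # Return the best-matching identity, or None if no stored record
    # is similar enough to the probe image's embedding.
    best_name, best_score = None, 0.0
    for name, stored in database.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

print(identify([0.88, 0.12, 0.31]))  # close to "alice"'s stored vector
```

The threshold is the crucial knob: set it low and the system "recognises" more people, including wrongly; set it high and it misses genuine matches. That trade-off underlies several of the concerns below.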
The rapid growth of this technology has triggered a much-needed debate. Activists, politicians, academics and even police forces are expressing serious concerns over the impact facial recognition could have on a political culture based on rights and democracy.
As someone who researches the future of human rights, I share these concerns. Here are ten reasons why we should worry about the use of facial recognition technology in public spaces.
1) It puts us on a path towards automated blanket surveillance
CCTV is already widespread around the world, but for governments to use footage against you they have to find specific clips of you doing something they can claim as evidence. Facial recognition technology brings monitoring to new levels. It enables the automated and indiscriminate live surveillance of people as they go about their daily business, giving authorities the chance to track your every move.
2) It operates without a clear legal or regulatory framework
Most countries have no specific legislation that regulates the use of facial recognition technology, although some lawmakers are trying to change this. This legal limbo opens the door to abuse, such as obtaining our images without our knowledge or consent and using them in ways we would not approve of.
3) It violates the principles of necessity and proportionality
A commonly stated human rights principle, recognised by organisations from the UN to the London Policing Ethics Panel, is that surveillance should be necessary and proportionate. This means surveillance should be restricted to the pursuit of serious crime rather than permitting unjustified interference with our liberty and fundamental rights. Facial recognition technology is at odds with these principles. It is a technology of control that is symptomatic of the state’s mistrust of its citizens.
4) It violates our right to privacy
The right to privacy matters, even in public spaces. It protects the expression of our identity without uncalled-for intrusion from the state or from private companies. Facial recognition technology’s indiscriminate and large-scale recording, storing and analysing of our images undermines this right because it means we can no longer do anything in public without the state knowing about it.
5) It has a chilling effect on our democratic political culture
Blanket surveillance can deter individuals from attending public events. It can stifle participation in political protests and campaigns for change. And it can discourage nonconformist behaviour. This chilling effect is a serious infringement on the right to freedom of assembly, association, and expression.
6) It denies citizens the opportunity for consent
There is a lack of detailed and specific information as to how facial recognition is actually used. This means we are not given the opportunity to consent to the recording, analysing and storing of our images in databases. Denied that opportunity, we lose choice and control over the use of our own images.
7) It is often inaccurate
Facial recognition technology promises accurate identification. But numerous studies have highlighted how algorithms trained on racially biased data sets misidentify people of colour, especially women of colour. Such algorithmic bias is particularly worrying if it results in unlawful arrests, or if it leads public agencies and private companies to discriminate against women and people from minority ethnic backgrounds.
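One way to see why bias matters in practice: the same match threshold can produce very different error rates for different groups of people. The toy figures below are invented purely to illustrate the arithmetic; they are not real benchmark results.

```python
# Invented similarity scores for pairs of DIFFERENT people, grouped
# by demographic. Any pair declared a "match" here is a false match,
# since the two faces belong to different individuals.
non_match_scores = {
    "group_a": [0.41, 0.52, 0.48, 0.39, 0.55],
    "group_b": [0.62, 0.71, 0.58, 0.66, 0.49],
}

THRESHOLD = 0.6  # pairs scoring at or above this are declared a match

def false_match_rate(scores, threshold):
    # Fraction of different-person pairs wrongly declared a match.
    return sum(s >= threshold for s in scores) / len(scores)

for group, scores in non_match_scores.items():
    print(group, false_match_rate(scores, THRESHOLD))
```

With these made-up numbers, group_a is never falsely matched while group_b is falsely matched most of the time, even though one threshold is applied uniformly. A single headline "accuracy" figure can therefore hide large disparities in who bears the cost of the errors.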
8) It can lead to automation bias
If the people using facial recognition software mistakenly believe that the technology is infallible, it can lead to bad decisions. This “automation bias” must be avoided. Machine-generated outcomes should not determine how state agencies or private corporations treat individuals. Trained human operators must exercise meaningful control and take decisions grounded in law.