Facial recognition technology as a measure to enhance public safety

As part of the European Union’s ‘smart city strategy’, more and more cities throughout Europe have started adopting new technologies in order to transition into a digital future. With the enhancement of safety in public spaces being a key feature of a smart city, local authorities opt for tools intended to contribute to the fight against crime, such as facial recognition technologies. Most notably, the High Court of England and Wales has ruled on whether the use of Automatic Facial Recognition (AFR) by South Wales Police in public spaces constitutes a violation of Article 8 of the ECHR. The court ruled in favour of AFR, holding that ‘the powers of the police to prevent and detect crime outweigh’ any interference its use might have on individuals. This article outlines the factors that a public or local authority may want to consider when evaluating the introduction of automated facial recognition technology.

Is facial recognition technology too intrusive to individuals’ privacy?

The use of what have been characterised as intrusive and disruptive technologies is often justified on grounds of overriding public interest, public safety and the prevention of crime. Law enforcement authorities can make use of different kinds of facial recognition technology not only to investigate crime but also to detect and prevent it in the future. The use of live facial recognition in public spaces has been criticised as intrusive to people’s privacy, as it automatically collects biometric personal data of people walking, crossing the street, and going about their daily lives. It is noteworthy that local authorities claim that the individuals whose biometric personal data are being collected are aware of the data collection and the purpose of the processing. Whenever the police are investigating a crime with the aid of live facial recognition, they are collecting real-time data of people passing through the specific area where a crime has taken place.

Retrospective facial recognition (RFR), on the other hand, collects personal data from various public spaces and stores it in a dedicated database, which is then checked against a database containing the biometric data of people who have been in police custody. This helps the police detect and prevent crime, as they attempt to identify individuals who have been associated with crimes in the past. In other words, the technology analyses, compares and matches the faces of random passers-by against individuals who have previously been charged by the police and thus have a criminal record. On the use of RFR, London’s Metropolitan Police appears to be adopting a different approach than the South Wales Police mentioned above. According to the Met, any facial recognition technology used under the police’s mandate in public spaces is only used as a means to tackle very serious crime, including, inter alia, gun and knife crime and child sexual exploitation.

London to become a ‘safer’ city with the use of facial recognition

The Office for Policing and Crime has signed a four-year contract with Northgate Public Services and is planning to install RFR technology across the city. The Deputy Mayor for Policing and Crime, Sophie Linden, justifies the use of such technology on grounds of public safety, crime prevention and the reduction of time-consuming activities, such as police officers manually matching people to pictures when this can be executed by algorithms. However, in light of the controversy around these tools, the Metropolitan Police is enhancing transparency by consulting the London Policing Ethics Panel (LPEP) on the upcoming installations. Consultation with this independent panel is intended to strengthen public confidence in policing and to provide ethical advice on the same.

Brexit and the proposed AI Regulation

The deployment of these types of facial recognition technologies for law enforcement purposes, however, is another area in which the UK is set to diverge from European law. In the draft text of the proposed EU AI Regulation, the use of “‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement” falls under the category of “Prohibited Artificial Intelligence Practices” unless certain very specific criteria are met (see Article 5(1)(d)). As such, it is unclear how the introduction of AI tools and legislation will impact the relationship between the UK and EU in terms of data governance, technological innovation, and data protection.

Trilateral’s Data Governance and Cyber Risk Team has been monitoring developments in the divergence between UK and EU law as a consequence of Brexit. If you need advice on data governance services or compliance support in relation to Data Protection or Artificial Intelligence, please feel free to contact our advisors, who would be more than happy to help.

Alkmini Gianni

Alkmini Gianni is Data Protection Advisor at Trilateral Research.
