Regulation of biometric data in Europe
March 2025 | SPECIAL REPORT: DATA PRIVACY & CYBER SECURITY
Financier Worldwide Magazine
We increasingly rely on technologies that can verify and authenticate our identities to secure our most valued assets – from unlocking smartphones to gating building access and securing public spaces. However, the use of such technology in the UK and the European Union (EU) is highly regulated, both by data protection law and under the new EU AI Act.
This article explores the key elements of these regulatory regimes as they apply to biometric technologies, and considers whether EU and UK regulation strikes the right balance between ensuring privacy and encouraging the adoption of valuable security solutions.
Biometric data is information about an individual’s physical, physiological or behavioural characteristics, which allow or confirm the unique identification of that individual. Biometric data – e.g., dactyloscopic data (i.e., fingerprints), facial images or facial mapping data, voice recognition data, retina scan data and iris image data – is a critical component of modern security and access solutions.
The technology in this space relies on pattern recognition, something which has been significantly enhanced through machine learning. As such, we have seen a rapid increase in precision and accuracy of biometric technologies, with adoption spanning a range of sectors, including retail, e-commerce, finance, security, law enforcement and healthcare.
The regulatory environment in the EU and the UK
The General Data Protection Regulation (GDPR) came into effect in May 2018, setting out a comprehensive (and technology neutral) data protection regime for all EU member states (and the UK, which retained an almost identical set of rules after leaving the EU in 2020). Below, we explore how the GDPR specifically regulates biometric data and some of the issues with the interpretation of the law.
The GDPR applies to personal data (i.e., any information relating to an identified or identifiable individual), and features enhanced rules for “special category personal data”. Biometric data is only considered ‘special category’ data where it is processed for the purpose of uniquely identifying a person. Consequently, the first challenge is determining whether a company’s use case falls under the standard regime or the enhanced rules. While the processing of biometric data (such as a facial scan) to verify an individual’s identity will clearly be regulated as ‘special category’ data, a grey area exists for related processing activities.
In its guidance, the UK’s Information Commissioner’s Office (ICO) explains that: “if your purpose is to uniquely identify someone, you are processing special category biometric data from the moment you collect the biometric data. It is not the case that you are only processing from the point that you attempt any comparison for identification or verification purposes.”
For example, building a database of facial images does not necessarily involve unique identification, but the underlying purpose of the compilation is likely to be related (e.g., data brokerage to customers developing facial recognition applications). Another grey area exists where the purpose of processing is not unique identification itself but something closely related, such as creating a lifelike rendering of a person from which that person is identifiable.
With respect to the processing of special category data, data controllers need to comply with both the general scheme of the GDPR (including its core principles such as lawfulness, fairness and transparency, purpose limitation and data minimisation) plus the enhanced requirements for special category data (including higher security standards, increased transparency and data safeguarding, and crucially, the need for the processing to meet one of the GDPR’s limited ‘conditions’ for processing).
The conditions, which are narrow, specific and strictly interpreted, significantly limit the exploitation of special category data. For example, one condition permits processing only in the course of the legitimate activities of a charity or not-for-profit body, subject to various requirements including that the processing is limited to the members or former members of that body. There is only one generally applicable condition: the explicit consent of the individual.
To be valid, consent under the GDPR must be freely given, specific, informed and an unambiguous indication of the individual’s wishes. The ‘explicit’ standard, which is reserved for special category data processing, additionally requires a clear statement of consent by the data subject. This presents a real challenge: any user interface (which is likely to be limited in the context of an app incorporating facial recognition technology) must be designed to allow for the requisite quality of consent, while not interfering with what should be an efficient and seamless step, for example as part of an onboarding process for a product or service.
One of the core principles underpinning the GDPR is the principle of fairness, which requires that an individual be treated fairly in the context of the processing of their personal data. With respect to biometric recognition, this can be interpreted as ensuring that solutions are accurate and free from discriminatory bias.
In its guidance on biometric recognition, the ICO explains that: “Identifying a lawful basis for processing does not mean your processing is lawful by default. You must also ensure that your use of biometric systems is lawful more generally. For processing to be fair, information must be used in ways that people would reasonably expect and that do not have unjustified adverse effects on them.”
The failings of a biometric identification process could have real-world implications, including denial of access to services, rejection from onboarding processes and account exclusions (i.e., being ‘locked out’ of an account), among others. As a result, any shortcomings of a biometric system in terms of accuracy could lead to a material GDPR violation.
The EU AI Act
The EU AI Act, which entered into force in August 2024 and whose prohibitions apply from February 2025, regulates artificial intelligence (AI) systems based on their risk profile. Various AI systems are prohibited under the Act, while others are regulated heavily as ‘high-risk’ AI systems. AI systems involving biometric data appear in both the prohibited and high-risk categories.
The following biometric use cases are banned under article 5 of the AI Act: (i) real-time remote biometric identification in public spaces for law enforcement (subject to narrow exceptions for apprehending perpetrators of criminal offences, preventing imminent threats to life or undertaking targeted searches for victims of abduction, human trafficking, sexual exploitation or missing persons); (ii) AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; and (iii) biometric categorisation systems that categorise natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
The use cases detailed in annex III of the Act are regulated as ‘high-risk’. Such use cases are permitted, subject to their use being authorised under national or EU law. They include remote biometric identification systems (excluding AI systems intended to be used for biometric verification, the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be), AI systems intended to be used for biometric categorisation (according to sensitive or protected attributes or characteristics, based on the inference of those attributes or characteristics), and AI systems intended to be used for emotion recognition.
These systems are considered high-risk due to their potential impact on fundamental rights. They can be distinguished from the prohibitions on the basis that they do not involve massive and indiscriminate scraping or pervasive or constant real-time monitoring.
However, while permissible, they are subject to a robust regulatory regime. Providers of high-risk AI systems must, among other things, put quality management systems in place, maintain technical documentation and logs, provide transparent information to downstream users, undertake detailed conformity assessments, incorporate human oversight, and register the system (in certain cases) in the EU high-risk database.
Overregulation or necessary safeguards?
The regulation of biometric data use is certainly necessary to protect fundamental rights. As technology becomes more sophisticated, we need to ensure that we are not subject to pervasive surveillance and detection, which could significantly impact our behaviour. However, we must also ensure that regulation neither stifles innovation nor acts as a deterrent to the adoption of valuable technologies that have the scope to enhance physical and cyber security and reduce fraud.
The AI Act’s risk-based approach to regulatory obligations may strike this balance. But there are many open questions regarding the interpretation of the biometric use cases which could have a dampening effect. For example, we need to better understand the material difference between the prohibition on biometric categorisation under article 5 of the Act and the biometric categorisation systems considered high-risk under annex III. We also need to understand more concretely the exemption for AI systems intended solely to verify that a person is who they claim to be.
The GDPR’s technology-neutral approach ensures that it captures all potential uses of biometric data, including lower-risk use cases not caught by the material obligations of the AI Act, and ensures that their use is subject to standards of fairness, lawfulness and transparency. Nothing is prohibited by the GDPR per se, but sanctions for compliance failures can be significant, so this is by no means a ‘lighter’ regulatory regime.
In this article, we have explored only some of the compliance steps that should be taken. In reality, monitoring biometric systems to ensure fairness, freedom from bias, safety and security, data minimisation and transparency requires dedicated, full-time resources.
Natalie Farmer is a partner at Fieldfisher LLP. She can be contacted on +1 (650) 313 2379 or by email: natalie.farmer@fieldfisher.com.
© Financier Worldwide
BY
Natalie Farmer
Fieldfisher LLP