In a recent discussion with an acquaintance, the following question was posed: would it bother you if a biometric template of yourself, any biometric at all, was stored by a third party, be that a bank, healthcare provider, government agency or any other responsible entity?
Interestingly, the answer was cautious and came with a number of caveats, primarily around consent and how securely this template would be stored.
The underlying caution was based on the assumption that if the template was somehow stolen, my acquaintance’s very existence would be terminally compromised. The answer was not particularly surprising, and I suspect it is representative of many people’s understanding of the nature and use of biometric data.
The question of consent, firstly, is highly topical and in no way straightforward. A recent example of the potential pitfalls, this one involving the UK’s tax agency, HMRC, highlights the need to be conversant with the regulations governing consent.
HMRC is extremely well versed in fraud and its detection and prevention, given the significant value of welfare and VAT fraud in the UK. HMRC has also recognised that a weak form of proxy authentication commonly used in call-centres, Knowledge Based Authentication (KBA), is far too easily compromised by fraudsters.
It correctly decided to implement a voice biometric authentication solution to replace KBA, not only improving security but also reducing the time taken to authenticate callers. In large call-centre operations, this alone can save significant amounts of money.
Unfortunately, as the linked article explains, millions of enrolled users will have their biometric records deleted because consent was not obtained in accordance with the requirements of the General Data Protection Regulation (GDPR). The GDPR is very specific regarding the consent of individuals for the processing of their biometric data.
Whilst the GDPR is a European regulation, almost every country has its own data protection and privacy regulations with regard to biometrics, and particularly the consent required to store and use them. Whilst many of these regulations are similar, it is still essential to understand the requirements and best practice of each individual country.
Vendors that undergo data protection and privacy certification through independent certification bodies, such as EuroPriSe within the EU, provide a high degree of confidence to their clients that the solution will be implemented and used in a compliant fashion. Such certification is not a trivial task, but it is absolutely vital when dealing with privacy-sensitive data such as biometrics and location-based data.
Whilst the end-client, as the data controller, is ultimately responsible for compliance with any applicable data protection legislation, the onus needs to be on biometric vendors to understand the data protection requirements of the countries in which they operate and to advise their clients appropriately. They, after all, are the experts in the technology, and that expertise needs to extend not only to use cases but also to the applicable laws, legislation and best practices.
So, the lesson from the HMRC episode is that whilst voice biometrics was the correct technological choice, the implementation does not appear to have taken into account the data protection requirements that accompany biometrics in the relevant jurisdiction(s).
Whilst this may seem pretty straightforward, i.e. understand, abide by and implement a biometric solution in accordance with the governing regulations, the reality of a specific use case may be very complex, and this is obviously where deep specialist knowledge is required.
Depending on where we live, the businesses we visit or the places we travel to, we may be biometrically processed on a daily basis without ever having knowingly consented to it. In most cases this occurs without our biometric data ever being stored, but perhaps not in all.
When visiting a bank branch, as soon as you enter the door your face may be biometrically processed, either to automate the Identification and Verification (ID&V) process for the bank’s clients or to ensure you are not a known criminal or fraudster. In the first example, if you haven’t consented and enrolled in the system you will not be matched or identified, because you don’t biometrically exist in a database. However, because you also don’t enter the branch holding up your account number, it can’t be determined whether you have consented or not, so your biometric data are processed regardless.
This is even the case if the data are not retained: the simple capture of personal data, even if fleetingly, often already constitutes a form of processing in terms of the GDPR.
In the second example, checking whether you’re a criminal or fraudster, you are processed against a database of known robbers or fraudsters, commonly referred to as a “Blacklist”. The same Blacklist processing occurs when a person rings their bank’s call-centre and speaks with an agent. In this instance the caller’s voice is biometrically processed against the voice models of known fraudsters, who are recorded and biometrically enrolled into the Blacklist once a fraud has been perpetrated and linked to their voice.
You may be told the call will be recorded for “training and legal purposes”, but I can’t recall ever being told I would be recorded for fraud-checking purposes, or that it involved biometric matching. In any case, merely informing callers does not mean that valid consent has been obtained under the GDPR. Also, if a person is wrongly identified as a criminal by means of the biometric check, this can constitute a serious violation of his or her data protection rights.
A recent legal hearing in Cardiff, Wales, has highlighted the potential grey areas of what legal frameworks are required in order to deploy biometric processing under the GDPR.
An individual, Ed Bridges, has launched a legal claim against the use of automated facial recognition (AFR) cameras and associated technology, on the basis that their use by South Wales Police was an unlawful violation of his privacy and breached data protection and equality laws.
Mr Bridges believes his image was captured by an AFR camera in Cardiff as he left his office to get some lunch. It has been revealed that only two people were arrested on the day of the AFR operation in question; his lawyer argued that scanning thousands of people was not justified by two arrests, and further that his client, while in a public space, should not have been scanned and processed when he had not provided consent and was not suspected of any wrongdoing.
South Wales Police counters that the technology does not infringe the individual’s data protection rights, as it is used in the same way as photographing a person in public, and further that the data are not retained if the individual is not a “person of interest”. The first point is dubious: a photograph may not constitute biometric data if it is not used to identify a person, whereas facial recognition processing clearly is, and it is strictly regulated under the GDPR.
The second point, that of retention, is where it all gets a bit confusing. Even if your biometric is not recorded in a database and there is no retention of it, the fact that it is captured, processed and compared to templates in a database would certainly still constitute “processing” for the purposes of the GDPR.
The Information Commissioner, acting as an “intervener” in the case, stated that the conversion of facial images to raw information “involves large scale and relatively indiscriminate processing of personal data”, which could constitute a “serious interference” with privacy rights.
There are evidently three UK police forces using AFR technology in public spaces at the moment: South Wales Police, Leicestershire Police and the London Metropolitan Police.
The Cardiff case follows on from an earlier case in 2012, in which the High Court ruled, in the case of RMC, that the retention of images of unconvicted individuals under the Metropolitan Police Service’s policy for the retention of custody images was unlawful, because the legal framework was inadequate.
That earlier case led to a review of the use and retention of custody images. The Cardiff claimant argues that the AFR operation in which he was scanned likewise took place without an appropriate legal framework.
These examples of biometric processing refer to what’s known as Identification, which is a 1:n (or ‘one-to-many’) comparison technique.
The better-understood form of biometric usage is known as Authentication or Verification, which is a 1:1 (or ‘one-to-one’) comparison. In authentication solutions the subject must have previously been enrolled in the system and must also make a claim of identity, whether implicit or explicit, prior to being authenticated.
The claim of identity provides uniqueness; it could be an account number, a National Insurance number, a mobile phone number or any other identifier that makes a person unique to an entity. It is used to retrieve the appropriate biometric template against which the comparison is performed.
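To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical: templates are reduced to plain numeric vectors, cosine similarity stands in for a real biometric matching engine, and the threshold is arbitrary. It simply shows that verification compares a probe against the single template retrieved via the claimed identifier, whereas identification, as in Blacklist screening, compares it against every template in a database.

```python
import numpy as np

# Hypothetical, simplified sketch: templates are plain numeric feature vectors
# and cosine similarity stands in for a real biometric matching engine.

def similarity(a, b):
    """Cosine similarity between two template vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(claimed_id, probe, enrolled, threshold=0.8):
    """1:1 verification: the claimed identifier (e.g. an account number)
    retrieves ONE stored template; the probe is compared only against it."""
    template = enrolled.get(claimed_id)
    if template is None:
        return False                 # never enrolled, so cannot be verified
    return similarity(probe, template) >= threshold

def identify(probe, watchlist, threshold=0.8):
    """1:n identification: the probe is compared against EVERY template in a
    database (e.g. a fraudster 'Blacklist'); no claim of identity is made."""
    scores = {pid: similarity(probe, tpl) for pid, tpl in watchlist.items()}
    best = max(scores, key=scores.get) if scores else None
    return best if best is not None and scores[best] >= threshold else None
```

The practical consequence is that identification necessarily processes the data of everyone scanned, enrolled or not, which is precisely where the consent questions discussed above arise.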
It is, I suspect, the latter technique that the authors of the GDPR had in mind when they framed the rules around biometric processing and the gaining of consent.
And this brings me back to my discussion about people’s perceptions of the risk of biometrics. The GDPR is very specific about the use of biometrics and refers to them as an especially sensitive category of personal data that warrants extra protection. But is that really the case when compared with other proxy forms of identity such as PINs and passwords?
Firstly, there is probably a widely held view that a biometric, when stored in a database, is somehow the same as what you hear or see, i.e. that a voice biometric template is audio of someone speaking, a fingerprint template looks just like a fingerprint, a facial template looks like an image of someone’s head, and so on. If this were the case, one could understand the anxiety about such a database being hacked or stolen: suddenly you could be spoofed.
However, in good biometric systems this is not the case. In such systems, biometric templates are digital representations produced by complex algorithms. A voice biometric template, for instance, contains no audio and is not a “sound” file; it is in fact a string of numbers that is meaningless to anyone else. What’s more, when implemented properly, it cannot be reverse-engineered back into its original audio form.
The same is true of any properly constructed template, whatever the biometric modality. In other words, if the template fell into the hands of a bad actor there is nothing they could do with it. This applies whether or not the template is encrypted; in a good biometric system, however, encryption is always applied.
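For a sense of what such a template actually looks like, here is a toy Python sketch. The toy_voice_template function is entirely hypothetical and nothing like a production voice biometric engine; it merely illustrates the general character of the output: a short vector of numbers derived from the signal, containing no audio and far too lossy to be turned back into speech.

```python
import numpy as np

# Illustrative only: a real voice biometric engine uses far more sophisticated
# modelling, but the stored result has the same character - a fixed-length
# vector of numbers, not a recording.

def toy_voice_template(samples: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Reduce an audio signal to a small vector of per-band energy averages.
    Many different recordings map to similar vectors, and the original audio
    cannot be reconstructed from the result."""
    spectrum = np.abs(np.fft.rfft(samples))
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.mean() for band in bands])

audio = np.random.randn(16000)        # one second of fake 16 kHz "speech"
template = toy_voice_template(audio)
print(template.round(2))              # just a handful of numbers
```

A stolen copy of such a vector tells an attacker nothing about what the speaker sounds like, and, as noted above, a good system would store it encrypted in any case.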
Compare that with a proxy form of identity such as a simple PIN or password. If stored (or transmitted, for example in an SMS) unencrypted, an all too common occurrence, a PIN or password is immediately identifiable and can be used by anyone in possession of it. But of course, you don’t need to hack a database to obtain this information; knowledge alone will suffice, and bad actors have many ways of illicitly obtaining it, through deception such as phishing, or simply ‘shoulder surfing’. One-time passwords are also susceptible to interception, with SIM swap fraud in particular being a major problem.
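The storage point is easy to show. The snippet below is a hypothetical sketch using Python’s standard library; it contrasts a plaintext password, which is immediately usable by anyone who obtains it, with a salted, iterated hash, which at least cannot be read off directly, though it remains only as strong as the password behind it.

```python
import hashlib
import hmac
import os

# A plaintext credential: a stolen copy can be used by anyone, immediately.
stored_plaintext = "Summer2019!"

# A salted, iterated hash of the same credential: not directly usable,
# though still vulnerable to guessing if the password is weak or reused.
salt = os.urandom(16)
stored_hash = hashlib.pbkdf2_hmac("sha256", b"Summer2019!", salt, 200_000)

def check(password: str) -> bool:
    """Re-derive the hash from a login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_hash)

print(check("Summer2019!"))   # True
print(check("wrong-guess"))   # False
```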
So, whilst some people are averse to the idea of their biometric data being held by a third party, we seemingly have no qualms about signing up to any number of websites or registering with countless apps, many of which require us to create a password, which the provider of course holds. And herein lies another potential problem. We know that many people use the same password for virtually everything to avoid the problem of forgetting them. But how do we know that every site or app we register with will hold this data in a secure, encrypted manner? It would appear we are more concerned about something which can’t be used against us than something which can.
It would seem, therefore, that there are a couple of issues at hand. Firstly, it is unclear, certainly in Cardiff, what constitutes biometric processing, whether it is allowed without explicit consent, and what legal framework is required. At the time of writing no decision has been handed down in the judicial review proceedings.
Obviously, this could have a huge impact on biometric “scanning”: using the technology to identify people who may or may not have consented to having a biometric template of themselves created and matched against a database, even if only transiently and for a matter of seconds. The ramifications of this within the EU would be far reaching, affecting crime prevention and no doubt anti-terrorism. Banks, too, could be adversely affected if the concept of Blacklist processing were to be banned.
Secondly, with the GDPR’s emphasis on biometrics as distinct from all other proxy forms of identification, along with certain people’s perception of the risk, à la Mission Impossible-style cloning or re-engineering, there is a potential education issue involved. The often-used argument that if the crooks get your biometric template you have lost your identity for evermore is certainly a myth when a reputable, certified biometric solutions provider is used.
Whatever happens with the Cardiff judicial review, I’m sure we haven’t heard the last of this until the definition of biometric processing and the GDPR rules on consent for biometric identification, rather than authentication, are further clarified.
By John Petersen, SVP Asia Pacific at ValidSoft