This blog concerns the information processed by the facial recognition system used by South Wales Police (SWP) in the recent judicial review case (see references). In summary, I don’t think the Data Protection Act 2018 (DPA2018) applies to most of this processing because the information processed by the system is not personal data.
This is not the conclusion the Court arrived at in its recent judgment (the “SWP judgment”); the blog explains what this judgment missed and why its conclusions are on “shaky ground”.
If I am correct, then there is a major privacy problem with law enforcement systems that use biometrics to identify individuals as they can be constructed to avoid a data protection regime.
Factual background
Before setting out my different view, I quote from the Privacy Impact Assessment (PIA; see references) associated with SWP’s facial recognition system, and from the judgment itself. These quotations give the key background facts relied on in this blog.
“The collection of personal information is via two CCTV cameras connected to the standalone laptop/server. The system ‘extracts’ a face from CCTV footage and then compares it against a pre-defined watch-list” (from SWP’s PIA).
“The watch-lists used in the deployments in issue in this case have included (a) persons wanted on warrants, (b) individuals who are unlawfully at large (having escaped from lawful custody), (c) persons suspected of having committed crimes, (d) persons who may be in need of protection (e.g. missing persons), (e) individuals whose presence at a particular event causes particular concern, (f) persons simply of possible interest to SWP for intelligence purposes and (g) vulnerable persons” (SWP judgment, paragraph 30).
Retention times: “...(2) Facial images that are not matched against: immediately deleted. (3) Biometric template (regardless whether match made): immediately deleted. (4) Facial images alerted against: images either deleted immediately following the deployment, or at the latest, within 24 hours following the deployment” (SWP judgment, paragraph 38).
“The CCTV feed will of course be itself saved and that data management covered under a separate PIA” (from SWP’s PIA).
There are four sets of processing circumstances to explore. The processing of data that concerns: (a) an individual who is on the watch-list; (b) an individual who is not on the watch-list; (c) an individual who is mistakenly identified by the facial recognition system as someone who is on the watch-list (a false positive); or (d) an individual on the watch-list who is not identified as being on the watch-list (a false negative).
I am assuming that, as facial recognition techniques improve over the next few years, the number of false negatives and false positives will be much reduced (perhaps becoming vanishingly small). Section 50(4) of the DPA2018 permits the Home Secretary to introduce further safeguards with respect to these situations.
In other words, I am not discussing cases (c) and (d), although several false positive errors have been publicised (but, in mitigation, the PIA states that the use of facial recognition technology is at the “proof of concept” stage). False negative errors will involve “lucky” criminals (whose luck will increasingly run out as these facial recognition systems improve).
Note that SWP’s facial recognition processing falls under both the GDPR and the law enforcement regime. Processing about “missing persons” is subject to the GDPR; processing in relation to “persons suspected of having committed crimes” is squarely within the law enforcement parts of the DPA2018.
Also, I am not going to consider any routine CCTV processing that is separate from the processing related to facial recognition (e.g. SWP kept CCTV images for 31 days, as is usual for many police CCTV systems).
This means the blog assumes that the facial recognition system is operational and stand-alone, and that it sends alerts to police officers when it matches someone on the watch-list.
Why the Court concluded personal data were processed
I was informed that there was a half-day argument in Court about whether or not personal data were processed by the facial recognition system when the system did not identify a match. The Court concluded the answer was “yes” as can be seen from the following extracts from the judgment:
“Starting from the definition of personal data in the DPA 1998, it is apparent that the scope of information that is personal data is not limited simply to information about persons whom a data controller has identified by name. The definition is formulated in wider terms as to whether a person “can be identified” either from the data in issue, or from that data and other information held by the data controller, or from that data and other information likely to come into the data controller’s possession.” (paragraph 113; judgment’s emphasis)
Comment: the judgment, by emphasising “can” in the above extract, is concluding that there is no need to actually or directly identify the individual (e.g. by name); “can” is suggestive of something far more flexible (e.g. where “can” means “has the potential to be identified indirectly”).
Then, a subset of Recital 26 of Directive 95/46/EC is quoted:
“Extracting from that definition the matters particularly pertinent to the case before us, we can see no distinction between the definition in the DPA 1998 and the notion in the 1995 Directive that an “identifiable natural person” is one who “… can be identified directly or indirectly… by reference to…factors specific to his physical … identity”. (paragraph 114 of judgment)
Comment: the indirect identification by “factors specific to his physical … identity” is, in essence, what facial recognition is all about.
The clincher, however, appears to be contained in the Vidal-Hall v Google Inc judgment (see references), where the SWP judgment stresses (by underlined italics):
“…If section 1 of the 1998 Act is appropriately defined in line with the provisions and aims of the Directive, identification for the purposes of data protection is about data that “individuates” the individual, in the sense that they are singled out and distinguished from all others…” (paragraph 115; judgment’s emphasis of Vidal-Hall).
Hence the conclusion that:
“In our view, the Claimant succeeds on his argument that the processing of his image by the AFR Locate equipment was processing of his personal data ... He succeeds on the basis that the information recorded by AFR Locate individuates him from all others, i.e. it singles him out and distinguishes him from all others” (paragraph 122 of judgment).
Two problems with the SWP judgment
There are two main problems with this judgment which, surprisingly, were not raised by any of the legal teams. These problems are:
- Directive 95/46/EC does not apply to the processing of personal data for law enforcement purposes. Article 3(2) of the Directive states that “This Directive shall not apply to the processing of personal data…and the activities of the State in areas of criminal law”. Comment: this undermines the judgment’s reliance on Recital 26, the Vidal-Hall judgment (and its CJEU references to Rynes and Breyer) to draw its conclusions. Note: the application of A.3(2) does not wholly undermine the judgment, as some processing is not for law enforcement purposes and is subject to the Directive (e.g. missing persons).
- There is no reference to Recital 21 of the Law Enforcement Directive (which is word for word the same as Recital 26 of the GDPR). I think it is this Recital which halts any application of this judgment to the DPA2018 regime (and to any residual GDPR element).
Why personal data are not processed under the DPA2018
Personal data have to relate to an identifiable living individual; the key issue with the SWP facial recognition system relates to the identifiability of that individual. If the individual is not identifiable, then personal data are not processed.
In my view, the information processed by the facial recognition system definitely relates to a living individual; however, such individuals are not identifiable if their biometrics do not match the biometrics on a watch-list and if their biometrics are deleted immediately after the matching attempt.
This is a difficult interpretative issue; however, my analysis is consistent with the relevant Recitals of the Law Enforcement Directive 2016/680 and GDPR (which were missed in the SWP judgment).
Fortunately, Recital 21 (LED) and Recital 26 (GDPR) have the same text, which I have split into three parts for convenience:
- “…To determine whether a natural person is identifiable, account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly”.
- “To ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments”.
- “The principles of data protection should therefore not apply to anonymous information, namely information which does not relate to an identified or identifiable natural person...”.
Objective factors
The “objective factors” (second paragraph) with reference to the case of an individual who is not on the SWP watch-list are as follows. First, the facial image is captured; then the biometrics associated with that face are generated; next, these biometrics are compared against the watch-list biometrics; and finally, these biometrics are deleted immediately as there is no match.
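To make these steps concrete, here is a minimal sketch of such a pipeline. It is not SWP’s actual system: the function names, data structures and similarity threshold are all hypothetical illustrations of the processing sequence just described (capture, template generation, comparison, immediate deletion).

```python
from dataclasses import dataclass

@dataclass
class WatchListEntry:
    reason: str                # e.g. "wanted on warrant", "missing person"
    template: list[float]      # the stored biometric template

def extract_template(frame) -> list[float]:
    """Hypothetical: derive a biometric template from a CCTV frame."""
    raise NotImplementedError

def similarity(a: list[float], b: list[float]) -> float:
    """Hypothetical: 0.0 (no resemblance) to 1.0 (identical)."""
    raise NotImplementedError

def process_frame(frame, watch_list: list[WatchListEntry],
                  threshold: float = 0.9) -> WatchListEntry | None:
    # 1. Facial image captured (the frame), biometrics generated:
    template = extract_template(frame)
    try:
        # 2. Compared against the pre-defined watch-list:
        for entry in watch_list:
            if similarity(template, entry.template) >= threshold:
                return entry       # 3. Match: an alert goes to an officer
        return None                # 3. No match: nothing is retained
    finally:
        del template               # 4. Template deleted, match or no match
```

Note that the unconditional deletion at the end mirrors retention rule (3) quoted earlier: the biometric template is deleted immediately, regardless of whether a match is made.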
So, the question is: “do these objective factors support the notion that the information processed constitutes personal data of an identifiable living individual not on a watch-list?”.
Consider the “amount of time required for identification” (i.e. the time for which the captured biometrics of an individual not on the watch-list will be stored). This, remember, is the time taken for the facial image of an individual to be captured, followed by the generation of biometrics, followed by matching against the biometrics of the complete watch-list.
Of course, if the watch-list is extensive, the time taken to check it might be lengthy. However, as press reports state that “China is building the world’s most powerful facial recognition system with the power to identify any one of its 1.3 billion citizens within three seconds” (see references), one concludes that technological developments will eventually ensure that the time to perform a facial recognition match is measured in seconds (if that).
So, let us assume the facial recognition system run by SWP is fully operational, and the “amount of time required for identification” for an individual who is on the watch-list is, say, three seconds.
These few seconds will also be the maximum time the biometrics of any individual who is not on the watch-list will be stored on the system before deletion. This is because the maximum time taken to identify someone on the watch-list is the same as the time needed to conclude that someone is not on it.
And what is “reasonably likely” to happen in these few seconds to make the data relating to this living individual, who is not on the watch-list, identifiable?
If such an individual is to be identifiable, there has to be some action that is reasonably likely (e.g. a processing operation such as retention, use or disclosure) and which is more reasonably likely than the processing operation of “delete” (which is embedded in the functionality precisely so that the individual is not identifiable).
In other words, the “objective factors” show that the processing that is most “reasonably likely” is immediate deletion of the data; note that such deletion is the only processing operation that guarantees the individual is not identifiable. Any other processing operation (e.g. retention, use, disclosure) opens up the argument that the processing is undertaken in order to make it “reasonably likely” that the individual concerned will become identifiable.
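The contrast can be made concrete by varying the earlier hypothetical sketch (re-using its extract_template and similarity stubs). A single change, retaining rather than deleting the template of a non-matched individual, is what would open the “reasonably likely” route to identifiability:

```python
def process_frame_retaining(frame, watch_list, store, threshold=0.9):
    # Identical matching logic to process_frame above, except that the
    # template of a non-matched individual is RETAINED, not deleted.
    template = extract_template(frame)
    for entry in watch_list:
        if similarity(template, entry.template) >= threshold:
            return entry
    # Retention: the template now survives the deployment, so a means
    # "reasonably likely to be used" to identify the individual exists.
    store.append(template)
    return None
```

On the argument set out above, only the deleting variant keeps the non-matched individual outside the definition of an identifiable natural person; the retaining variant is processing personal data.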
It thus follows that the individual is not identifiable, is not a data subject, and the information is not personal data. (The same analysis applies to Recital 26 of the GDPR, as it has the same text as Recital 21 of the LED.)
A recipe for secret surveillance
So, suppose a law enforcement agency installs a stand-alone, operational facial recognition system (i.e. one which does not include any normal CCTV functionality, such as the storage of all images).
Such a facial recognition system, one that immediately deletes the biometrics and images of those who are not on the watch-list, is processing information which does not relate to an identified or identifiable natural person. It follows that there are no data protection obligations (e.g. no transparency) as no personal data are processed.
For information relating to those on the watch-list, there are data protection obligations as the information is personal data. But since the vast majority will be suspected of involvement in criminal activity, the rights of data subjects and transparency arrangements can be minimal (if not wholly exempt).
For instance, if being transparent would prejudice a criminal investigation, national security, effective immigration control, the assessment of any tax or duty, any legal obligation, etc. (just go through the exemptions in Part 3, Part 4 and Schedules 2 to 4 of the DPA2018), then transparency could well be exempt.
As SWP’s PIA refers to possible future linkage to the Passport Office (biometric photos?), ANPR and DVLA databases, and the extension of facial recognition techniques to body-worn cameras, it is clear that these links could result in the development of a very extensive watch-list.
Watching Brief
So, how does one qualify to have one’s biometrics placed on a watch-list? Who decides who is listed? How does one get off a watch-list? How long does one remain watch-listed? What is the extent of the watch-list?
If I were to be critical of SWP’s PIA, undertaken under the DPA1998 (see references), it does not contain a complete description of the types of “persons of interest”. For instance, the PIA’s statement that “persons of interest …could be persons wanted on suspicion for an offence, wanted on warrant, vulnerable persons and other persons where intelligence is required” (my emphasis) is arguably misleading if the word “could” disguises a very extensive list of other “persons of interest” and the reasons why they are of interest.
To make my concerns absolutely clear: who would be on a watch-list if such facial recognition systems were implemented by Hong Kong’s law enforcement authorities?
The conclusion I have reached, unlike that of the SWP judgment, is that the DPA2018 cannot regulate any facial recognition system used by any law enforcement or national security agency if the agency maintains watch-lists of those who are of interest to it and data relating to non-matches are deleted immediately.
Nightmare scenario? In the next decade, most CCTV cameras under public authority control possess facial recognition functionality connected to a mega watch-list, where data about those not on the watch-list are deleted immediately (as above). Because this analysis relies on the Law Enforcement Directive, this nightmare could occur in any Member State of the European Union.
Sleep well.
Comment added on 8 October 2019: It looks as if A.8 of the ECHR is engaged. Paragraph 59 of the SWP judgment reads:
“The fact that, save where a match is detected, facial biometric information is retained for only a very short period, does not affect the analysis. The application of Article 8 is not dependent on the long-term retention of biometric data. It is sufficient if biometric data is captured, stored and processed, even momentarily. The mere storing of biometric data is enough to trigger Article 8 and the subsequent use (or discarding) of the stored information has no bearing…”
References
Vidal-Hall v Google Inc. [2016] QB 1003
Bridges v South Wales Police (and interested parties; Home Office, ICO, Surveillance Commissioner) [2019] EWHC 2341 (Admin)
The PIA for SWP facial recognition system: Download SWP PIA here
Chinese facial recognition CCTV: https://www.scmp.com/news/china/society/article/2115094/china-build-giant-facial-recognition-database-identify-any