Speakers at the Data Protection Forum in early March reinforced my view that the Data Protection and Digital Information Bill (“DPDI Bill”) should be used as a vehicle to implement the EU’s AI Act. [Obviously my Petition, which states this, should also be supported: so please sign it! – see references].
One speaker, at the end of her presentation, made several personal comments about the risks associated with the fragmentary, “wait and see” approach of the UK Government towards AI regulation.
She pointed out that, in the UK, this meant that the five pillars of AI trustworthiness (Fairness, Robustness, Privacy, Explainability, Transparency: see references) could all be interpreted differently by about 90 Regulators.
In more detail, these “trusty” five pillars, largely developed by the AI industry itself, can be summarised thus:
- Fairness: ensures that AI models don't exhibit biased behaviour when considering factors such as age, gender and ethnicity (a sketch of one such check appears below).
- Robustness: focuses on the ability of AI models to perform well under exceptional conditions and adapt to changing circumstances. Comment: I assume this covers a change in AI functionality if there were to be another pandemic (the latter being the exceptional condition which could upend society’s usual modus operandi).
- Privacy: ensures that data used for training AI models remain protected and that insights derived from the models are under the control of the model builder. Comment: protected from whom? Does “under the control of the model builder” exclude data subjects (e.g. their rights)?
- Explainability: involves being able to understand and explain the decisions made by AI models to end users and decision makers. Comment: does this include data subjects?
- Transparency: emphasises making all relevant information about AI models easily accessible and inspectable. Comment: accessible and inspectable to data subjects?
The point being made in the “Comments” is that any reference to “data subjects”, whose personal data are likely to be at the centre of much AI processing, is notably absent, especially with respect to the pillars relating to Privacy, Explainability and Transparency.
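To make the Fairness pillar concrete, here is a minimal sketch (in Python, with entirely hypothetical data, group labels and function names – none drawn from IBM’s framework, the DPDI Bill or the EU AI Act) of the kind of demographic-parity check an organisation might run to detect the biased behaviour the pillar warns against. A gap of zero would mean every group receives favourable decisions at the same rate.

```python
# A minimal sketch of a "Fairness" pillar check: demographic parity.
# All names and figures here are hypothetical illustrations; they are
# not drawn from IBM's framework, the DPDI Bill, or the EU AI Act.

def demographic_parity_gap(decisions, groups):
    """Largest difference in favourable-decision rates between groups.

    decisions: 0/1 model outcomes (1 = favourable decision)
    groups:    group label (e.g. an age band) for each decision
    """
    totals = {}
    for decision, group in zip(decisions, groups):
        favourable, seen = totals.get(group, (0, 0))
        totals[group] = (favourable + decision, seen + 1)
    rates = [favourable / seen for favourable, seen in totals.values()]
    return max(rates) - min(rates)

# Hypothetical outcomes skewed towards the younger group.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["18-30"] * 5 + ["60+"] * 5
print(f"Demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")
# Prints 0.60 (0.80 for "18-30" minus 0.20 for "60+"): a large gap
# suggests the model's decisions warrant closer scrutiny for bias.
```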
Precautionary Principle
Given the obvious risks to data subjects, the Government’s laissez-faire approach to AI regulation abandons the Precautionary Principle that should guide its governance.
The Precautionary Principle arises when the risks of an action on society are high and the evidence about that action’s future impact on society is uncertain. I would argue that this fits the circumstances surrounding the development or use of AI to a tee.
It is this Principle which explains why the EU has enacted the Data and AI Acts that apply to the processing of all forms of data by AI. By contrast, the Government’s approach sees this Principle as an obstacle to progress.
Indeed, the Government has deliberately weakened this Principle to accelerate such “progress” by reducing the protection afforded to data subjects.
Data subjects at risk
For example, the Article 22 UK GDPR prohibition on profiling and automated decision-taking is reversed. Under the Bill, AI profiling and automated decisions can, in general, take place, subject to a few exceptions usually associated with the processing of special categories of personal data.
Yes, there are safeguards, but these all apply post-decision: once the automated decision has taken effect and has impacted on the data subject. There has been very little consideration of implementing safeguards prior to that AI-induced, automated decision.
Secondly, much AI testing and development is also likely to fall within the revised “scientific research” definition. Thanks to the Bill’s new transparency exemption for scientific research, databases of personal data can be exchanged with Third Parties for their AI development purposes without existing data subjects knowing, so long as some (dodgy) “appropriate safeguards” apply.
One such appropriate safeguard is that substantial damage/distress to any data subject is unlikely; this merely means that moderate damage/distress to data subjects is acceptable to this Government!
There are impacts on data subjects’ rights. How can data subjects exercise their right to object to the processing if they’re kept ignorant of who processes their personal data because of the new transparency exemption associated with scientific research?
Finally, such personal data exchange for scientific research purposes (i.e. AI development; algorithm testing) is also deemed “compatible” with the purpose of collection by the DPDI Bill. In this way, the Bill throws precautions to the wind, and legislates its way around the provisions in Article 6 (lawfulness of processing) that could prevent unlawful processing of personal data for AI development purposes.
Concluding comment
The Precautionary Principle, if followed, would see AI clauses added to the DPDI Bill aligned with the EU Data and AI Acts (both of which maintain the GDPR standard). Such amendments could be tabled for the Bill’s Report stage in the House of Lords (expected to run over five days in May/June).
This Bill, so amended and enacted (possibly before the summer recess), will preserve the ability of the UK’s AI businesses to develop services that meet the requirements of the only available, recognised, international legal (and precautionary) standards that apply to AI.
These AI legal standards might not be the best, but they establish an important safety brake in UK law. That is what the Precautionary Principle is all about. In addition, as the UK is no longer an EU Member State, the next Government can tweak these provisions if they do not work as intended.
The alternative is to adhere to this Government’s policy towards AI and put the UK’s data subjects in harm’s way.
Forthcoming Data Protection Courses
The following BCS Practitioner or Foundation courses can be attended in person, or via Zoom, or as a mixture (i.e. part Zoom, part in-person attendance, just in case “stuff happens on the day”).
- Data Protection PRACTITIONER Course is in London on Monday, 18 March to Friday, 22 March (5 days: 9.30am to 5.30pm).
- Data Protection FOUNDATION Course is in London on Tuesday, 9 April to Thursday, 11 April (3 days: 9.45am to 5.00pm).
- All-day Zoom workshop (10.00am to 4.30pm) on the DATA PROTECTION AND DIGITAL INFORMATION BILL held on Thursday, 9 May. Email [email protected] for the workshop agenda or to reserve a place on this session.
References
Regulation 2023/2854 (Data Act) on “Harmonised rules on fair access to and use of data” (published in the OJ on 13 December 2023). See also my updated Hawktalk blog dated 5 January 2024 (“DPDI Bill combines with EU's Data Act and AI Act to strangle the UK’s AI industry?”).
The Petition is at: https://petition.parliament.uk/petitions/652982. Please consider signing it and passing it to colleagues who may also wish to sign.
Detailed reasons why you should sign the Petition are on my Hawktalk blog: https://amberhawk.typepad.com/amberhawk/2024/03/petition-calls-for-implementation-the-eu-data-act-to-protect-data-subjects-from-ai-abuse.html
Chapter and verse on the relationship between scientific research and AI: https://amberhawk.typepad.com/amberhawk/2023/10/dpdi-no-2-bill-undermines-transparency-of-artificial-intelligence-development-and-training.html
The example used is IBM’s five pillars: https://www.toolify.ai/ai-news/the-five-pillars-of-trust-for-ai-a-guide-to-building-reliable-and-ethical-ai-systems-2584388