Extracts from Committee stage (Lords) (day 3) of the Data Protection Bill - Nov 13
Lord Clement-Jones (LD): My Lords, in moving Amendment 74, I will also speak to Amendments 74A, 75, 77, 119, 133A, 134 and 183—I think I have encompassed them all; at least I hope I have. In a way this is an extension of the very interesting debate that we heard on Amendment 71A, but further down the pipeline, so to speak. This group contains a range of possible and desirable changes to the Bill relating to Artificial Intelligence and the use of algorithms.

Data has been described, not wholly accurately, as the oil of Artificial Intelligence. With the advent of AI and its active application to datasets, it is vital that we strike the right balance in protecting privacy and the use of personal data. Indeed, the Minister spoke about that balance in that debate.

Above all, we need to be increasingly aware of unintended discrimination where an element of a decision involves an algorithm. If a particular system learns from a dataset that contains biases, such as associating female names with family roles and male names with careers, it is likely to reproduce them in its decisions. One way of helping to identify and rectify bias is to ensure that such algorithms are transparent, so that it is possible to see not only what data is being used but the steps being taken to process that data in coming to a particular conclusion.
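[Editorial illustration, not part of the debate: the mechanism the noble Lord describes — a model trained on skewed data reproducing that skew in its decisions — can be shown with a toy sketch. The names and the "training corpus" below are entirely invented.]

```python
from collections import Counter

# Hypothetical biased training data: name/role-word co-occurrences,
# invented purely to illustrate the point made in the speech.
corpus = [
    ("mary", "family"), ("mary", "family"), ("mary", "career"),
    ("john", "career"), ("john", "career"), ("john", "family"),
]

# "Training" is nothing more than counting co-occurrences.
counts = Counter(corpus)

def career_score(name):
    """Fraction of a name's co-occurrences that involve 'career'."""
    total = sum(c for (n, _), c in counts.items() if n == name)
    return counts[(name, "career")] / total if total else 0.0

# The learned scores simply mirror the skew in the data: any
# downstream decision ranked by this score favours "john" over "mary",
# even though nothing in the code mentions gender explicitly.
print(career_score("john"))  # 2/3
print(career_score("mary"))  # 1/3
```

Transparency of the kind the amendment seeks — seeing both the data used and the processing steps — is what makes a skew like this detectable at all.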
In all this, there is the major risk that we do not challenge
computer-aided decision-making. To some extent, this is
recognised by article 22 of the GDPR, which at least gives the
right of explanation where there is fully automated
decision-taking, and it is true that in certain respects, Clause
13 amplifies article 22. For instance, article 22 does not state
what safeguards need to be in place; it talks just about proper
safeguards. In the Bill, it is proposed that, after a decision
has been made, the individual has to be informed of the outcome,
which is better than what the GDPR currently offers. It also
states that data subjects should have the right to ask that the
decision be reconsidered or that the decision not be made by an
algorithm. There is also the requirement, in certain
circumstances, for companies and public bodies to undertake a data
protection impact assessment under Clause 62. There are also new
provisions in the GDPR for codes of conduct and certification, so
that if an industry is moving forward on Artificial Intelligence
in an application, the ICO can certify the approach that the
industry is taking on fairness in automated
decision-taking...

Lord Clement-Jones: My Lords, I thank all noble Lords who spoke in the debate. It has been wide-ranging but extremely interesting, as evidenced by the fact that at one point three members of the Artificial Intelligence Select Committee were speaking. That demonstrates that currently we live, eat and breathe Artificial Intelligence, algorithms and all matters related to them. It is a highly engaged committee. Of course, whatever I put forward from these Benches is not—yet—part of the recommendations of that committee, which, no doubt, will report in due course in March.

7.15 pm
I very much like the analogy the noble Lord, Lord Stevenson, drew between this debate
and the human fertilisation and embryology debate, and I noticed
that the Minister picked up on that. Providing the ethical
framework for AI and the use of algorithms will be extremely
important in the future, and in due course we will come on to
debate what kind of body might be appropriate to set standards
and ethical principles. I quoted the Minister, Matt Hancock, because that speech was all
about creating public trust so that we can develop the beneficial
uses of Artificial Intelligence while avoiding its perils—the
noble Lord, Lord Lucas, put his finger on some of the
issues. That will be important if we are to get acceptance of
this new technology as it develops, particularly as we move from
what might be called weak AI towards strong, general AI. We do
not know what the timescale will be, but it will be particularly
important to create that level of public trust. So it is
extremely important in this context to kick around concepts of
accountability, explanation, transparency, and so on...