Asked by
To ask Her Majesty’s Government what assessment they have made of
the calls made at the August meeting of the Group of Governmental
Experts on Lethal Autonomous Weapons Systems at the Convention on
Certain Conventional Weapons for a legally binding instrument,
including both prohibitions and positive obligations, to regulate
autonomous weapons systems.
The Minister of State, Ministry of Defence (Con)
My Lords, the UK is an active participant in United Nations
discussions on lethal autonomous weapons systems, working with
partners to build norms to ensure safe and responsible use of
autonomy. The UK and our partners are unconvinced by the calls
for a further binding instrument. International humanitarian law
provides a robust principle-based framework for the regulation of
weapons deployment and use. A focus on effects is the most effective way of dealing with complex systems in conflict.
(LD)
My Lords, the Minister’s reply is pretty disappointing. It puts
the Government, despite statements in the integrated review, at
odds with nearly 70 countries and thousands of scientists in
their unwillingness to rule out lethal autonomous weapons. Will
the Minister commit to rethinking government policy and to
giving our representatives at the next meeting of the Convention
on Certain Conventional Weapons on 2 December a mandate to go
ahead with negotiations for a legally binding instrument, which,
after all, has been called for by the UN Secretary-General?
(Con)
I am sorry that the noble Lord is disappointed, because I know
the extent of his interest in this issue. I have tried to
facilitate engagement with the department to enable him to better
understand what the department is doing and why we take the views
that we do. He will be aware that international consensus on a definition of LAWS has so far proved impossible. At this time,
the UK believes that it is actually more important to understand
the characteristics of systems with autonomy that would or would
not enable them to be used in compliance with IHL, using this to
set out potential norms of use and positive obligations.
Lord West (Lab)
My Lords, nations are sleepwalking to disaster. Engineers are already making autonomous drones the size of my hand, with cameras, that act completely autonomously. They can, for example, carry facial recognition and a small shaped charge, and will kill the person that the facial recognition identifies. Once you release them, that is it; off they go. The firms producing these are talking in terms of, “Yes, if we had several thousand of these, gosh how wonderful, because we could kill a great chunk of a city without damaging it at all and get rid of the people there”. I find this quite horrifying. Also, these things are AI: they learn; therefore, they will learn how to kill beyond what they have been programmed to do. This is extremely
dangerous. Do the Government agree completely that, wherever
there is a kill-chain that ends up with a dead human being, there
should be a human somewhere in that kill-chain to make that
decision, rather than a robot?
(Con)
All weapon systems, whether with autonomous functions or not,
must fully comply with the principle-based international
humanitarian law framework. A robust application of that
framework, I would suggest, is the best way of ensuring the
lawful and ethical use of force in all circumstances. That
applies to all states that might be developing autonomy in their
weapons systems.
Lord Lancaster (Con)
Can my noble friend the Minister confirm that the UK has agreed
not to develop autonomous weapons? Of course, we run the risk
sometimes of confusing autonomous weapons with automated weapons,
where there will be a human being in that decision-making cycle.
While some are concerned about the UK’s definition of autonomous
weapons, I think it is quite far-sighted because it will take
into account future developments. Perhaps my noble friend could
offer some clarity as to where in that chain, from targeting to
operating that weapon, there will be human intervention.
(Con)
I thank my noble friend for acknowledging the difficulties that accompany definitions and prescriptive attempts to define these systems. UK
Armed Forces do not use systems that employ lethal force without
context-appropriate human involvement. This is an important area;
it is clearly an area of evolving policy and it is an area where
we are absolutely clear that the best way forward is to continue
our international engagement with the Group of Governmental Experts.
(Lab)
Artificial intelligence is clearly an increasing part of the
modern way of warfare but, as we have just heard from the noble
Lord, Lord Lancaster, and my noble friend Lord West, it brings
with it enormous moral challenges. I think what the House wants to hear the Minister say, unequivocally and as a matter of principle, is that there will always be human oversight when it
comes to the use of artificial intelligence; in particular, that
human oversight is involved whenever there is any decision about
the lethal use of force.
(Con)
It is not possible to transfer accountability to a machine. Human
responsibility for the use of a system to achieve an effect
cannot be removed, irrespective of the level of autonomy in that
system or the use of enabling technologies such as AI.
Baroness Smith of Newnham (LD)
My Lords, I have been listening closely to the Minister and I am
still not quite sure whether she has said that the Government
will unequivocally state that no autonomous drone or other AI
could take a life, and that every decision would have to have
human engagement. Can she confirm that that is the case? I
declare an interest as an officer of the APPG on Drones and
Modern Conflict.
(Con)
I simply repeat to the noble Baroness what I said to my noble
friend Lord Lancaster: that UK Armed Forces do not use systems
that employ lethal force without context-appropriate human
involvement.
(Lab)
My Lords, the National AI Strategy was published in September and
promises were made that, before the end of the year,
“details of the approaches the Ministry of Defence will use when
adopting and using AI”
will be published. However, on 22 October the AI strategy for
NATO, which presumably we agreed to, was published and it
emphasised the principles of lawfulness, responsibility and
accountability. Does the Minister not agree that it is now time
for the UK to publicly reaffirm our commitment to ethical AI,
including international law and human rights, and to tell our
public and the international community that our Government are
ready, as our Governments always have been, to show global
leadership on these issues, particularly on lethal autonomous
weapons?
(Con)
The noble Lord is quite correct that the department has said that
it will publish a defence AI strategy. When I was told it would
be in the autumn, I pointed out that the autumn had pretty well
come and gone. I am reassured that significant work has been done
on the strategy and we can expect publication in early course. It
will set out our vision to be the most effective, efficient,
trusted and influential defence organisation of our size, and it will have principled components. I would not wish to pre-empt
what the strategy will say, but I would hope that it will serve
to answer many of the noble Lord’s questions.
(Con)
My Lords, I declare my technology interests as set out in the
register. Does my noble friend agree that, whether in safety or
security, the public good or economic growth, the UK has a unique
opportunity for the development and deployment of ethical AI?
Further, does she agree that we urgently need public debate and
engagement if we are to achieve, not just in defence but across
all potential applications, optimum outcomes?
(Con)
I say to my noble friend, building on what I have already
indicated to the Chamber, that AI and autonomy clearly have the
potential to transform all aspects of defence, from the back
office to the front line. They are a strategic priority for
defence and we take that evolution of policy seriously. As I
indicated to the noble Lord, more will be disclosed when we publish our defence AI strategy in early course.