Implications of artificial intelligence (AI)
for cyber security and for military and defence purposes to be
explored in two Lords Artificial Intelligence Committee evidence
sessions.
The Committee will take evidence on whether AI
facilitates new kinds of cyberattacks, and whether only
state-sponsored hackers have the means to deploy artificial
intelligence.
The first session will also ask witnesses about
the UK’s capability to protect against the impact of AI on cyber
security, and whether the law is sufficient to prosecute those
who misuse AI for criminal purposes.
The second session will consider the current and
potential military and defence applications of AI, including what
degree of direct human control over the deployment and targeting
of a weapon system should be required; and whether a ban on
lethal autonomous weapons is needed.
These are among the key issues the House of Lords
Select Committee on Artificial Intelligence will be putting to two
panels of witnesses on Tuesday 28 November 2017.
The first panel will be at 3.30pm and the Committee
will hear from:
- Dr Mark Briers, Strategic Programme Director for Defence and Security, The Alan Turing Institute
The Defence and Security Programme is a collaboration between
the Ministry of Defence (Defence Science and Technology
Laboratory and Joint Forces Command), GCHQ and The Alan Turing
Institute to deliver data science research.
- Professor Christopher Hankin, Director, Institute for Security Science and Technology, Imperial College London
Professor Hankin leads projects focussed on developing
advanced visual analytics and providing better decision support
to defend against cyberattacks. He is Chair of the Academic
Resilience and Security Community and a member of the
Cybersecurity Advisory Panel of the Global Cyber Security
Capacity Centre.
Questions the Committee is likely to ask
include:
- Which types of AI application are most vulnerable to cyberattacks?
- Are AI researchers aware of how their research might be misused?
- Should adversarial AI attacks be taken into account when developing new AI applications? Will mandatory regimes of stress testing or penetration testing be required? Once the General Data Protection Regulation and Data Protection Bill have come into force, will the law be able to adequately prosecute those who misuse AI for criminal purposes?
The second panel will be at 4.30pm and the Committee
will hear from:
- Professor Noel Sharkey, Emeritus Professor of AI and Robotics and Professor of Public Engagement, University of Sheffield
Professor Sharkey is best known as the chief judge on BBC
TV’s Robot Wars programme. He is also Co-Director of the
Foundation for Responsible Robotics and Principal Spokesperson
for the Campaign to Stop Killer Robots.
- Major Kitty McKendrick, Visiting Fellow, Chatham House
Major McKendrick is a British Army Officer. Her research
interests include defence, security and the application
of artificial intelligence in military operations. Chatham
House, the Royal Institute of International Affairs, is an
independent policy institute based in London. She will be
speaking to the Committee in her capacity as an independent
researcher.
- Dr Alvin Wilby, Vice-President Research, Technical and Innovation, Thales Group
Thales designs and builds electrical systems and provides
services for the aerospace, defence, transportation and
security markets. It is the 10th largest defence contractor in
the world, with military sales accounting for 55 per cent of its
total sales. In October 2017, Thales announced
the creation of the Centre of Research and Technology in
Artificial Intelligence eXpertise (cortAIx) in
Montreal.
Questions the Committee is likely to ask
include:
- How might the use of AI in military applications change the nature of warfare?
- Are non-state actors or rogue states likely to use lethal autonomous or semi-autonomous AI applications?
- Can civilian AI applications be easily converted to military or offensive purposes?
- What ethical principles should guide companies developing AI systems for military and security applications?
- Should the UK Government consider restricting the sale or export of particular AI systems for military applications?
These evidence sessions will take place from
3.30pm on Tuesday 28 November 2017 in Committee Room 4A of the
House of Lords.