Online Safety Bill
The purpose of the Bill is to:
● Deliver the manifesto commitment to make the UK the safest
place in the world to be online by improving protections for
users, especially children, whilst protecting freedom of
expression.
The main benefits of the Bill would be:
● Preventing online fraud and scams by requiring large social media platforms and search engines to stop the hosting or publication of fraudulent paid-for advertising.
● Tackling the most serious illegal content, including
child sexual exploitation and abuse.
● Ensuring communication offences reflect the modern
world, with updated laws to tackle threatening communication
online as well as criminalising cyberflashing.
● Safeguarding freedom of expression. Tech companies will no longer be able to arbitrarily remove content. Users who feel they have been treated unfairly will have the right to appeal, and journalistic and democratically important content will also be protected from arbitrary removal.
● Restoring public trust by making companies
responsible for their users’ safety online, whilst supporting
a thriving and fast-growing digital sector.
The main elements of the Bill are:
● Introducing a duty of care on online companies,
making them responsible for protecting users and tackling
illegal content. This will create safeguards and standards so
that users know when and how companies are using tools to
identify illegal content and to stop harmful material being
viewed by children.
● Empowering users by ensuring the largest platforms
give users tools to exercise greater control over the types
of people and content they interact with.
● Protecting democratic and journalistic content. The Bill sets a higher bar for the removal of content that contributes to democratic political debate, and all ‘recognised news publishers’ will be exempt from the Bill’s safety duties (for both children and adults).
● Requiring providers who publish pornographic content
on their services to prevent children from accessing that
content, and for the largest platforms to put in place
proportionate systems and processes to prevent fraudulent
adverts being published or hosted on their service.
● Ensuring the big social media companies keep their promises to users by enforcing their terms and conditions consistently, and requiring platforms to provide effective and accessible reporting and redress mechanisms so users can raise concerns about harmful content and challenge infringements of their rights (such as wrongful takedown).
● Designating Ofcom as the independent online safety regulator and giving it robust enforcement powers to uphold the regulation. These will include fines of up to £18 million or ten per cent of qualifying annual global turnover, whichever is greater, as well as business disruption measures that would make non-compliant services less commercially viable in the UK. Senior managers of tech firms can be held criminally liable if they fail to comply with information requests from the regulator.
Territorial extent and application
● The Bill will extend and apply across the UK.
Key facts
● In 2020, adult internet users in the UK spent an average of three hours and 37 minutes online each day, nine minutes more than in 2019. However, over 80 per cent of UK adults expressed a concern about going online in 2020.
● In a month-long period during 2020, the Internet
Watch Foundation and its partners blocked at least 8.8
million attempts by UK internet users to access child sexual
abuse material online.
● During COVID-19 lockdowns, research by YouGov showed
that 47 per cent of children and teens had seen content they
would rather avoid, leaving them feeling uncomfortable (29
per cent), scared (23 per cent), and confused (19 per cent).
One in seven (13 per cent) were exposed to harmful content on
a daily basis.