- Report outlines ‘major holes’ in the current Online Safety Act, as it fails to deal with algorithmic amplification of ‘legal but harmful’ misinformation
- Social media business models incentivise the viral spread of misinformation – more must be done to hold these companies accountable
- Committee calls for new standards for social media companies, the plugging of gaps in digital advertising regulation, and legislation on generative AI
The Online Safety Act (OSA) cannot keep the UK public safe as it
was not designed to tackle misinformation, MPs say today, in a
wide-ranging report that urges the government to go further to
regulate social media companies and disincentivise the viral
spread of false content.
Today's report from the Science, Innovation and
Technology Committee (SITC) welcomes the OSA as a first
step but warns that far more must be done to tackle online harm.
The committee urges the government to base a new and improved
online safety regime on five fundamental principles: public
safety, free and safe expression, responsibility for content,
control over content and data, and technological transparency.
The spread of misleading and harmful content on
social media
The unrest and riots of summer 2024 were driven in part by
misinformation and hateful content that was amplified on social
media platforms by recommender algorithms. SITC warns that social
media companies ‘often enabled or even encouraged’ this viral spread – and may have profited from it – due to their advertising- and engagement-based business models, endangering
the public.
The committee warns that the OSA fails to address the algorithmic
amplification of ‘legal but harmful’ content, leaving the public
vulnerable to a repeat of last summer's crisis.
Social media companies are responsible for the amplification of false and harmful content on their platforms, and the report urges stronger government action: it recommends imposing a set of duties on the companies to deprioritise content that fact-checkers have found to be misleading.
The committee is also concerned that government policy is
hamstrung by a lack of accurate, up-to-date information about how
recommendation algorithms operate, caused by insufficient transparency on the part of social media companies. Without this
information, it is impossible to properly identify and address
online harms. The report says that the government should
commission independent research to remedy this, using its
findings to inform the recommended standards and duties.
The committee stresses throughout that these measures should not censor legal free expression, but should instead place proportionate restrictions on the spread of fact-checked misinformation.
Generative AI
The OSA is already out of date because it fails to adequately address generative AI, and the report urges the government to pass legislation covering generative AI platforms.
The report also highlights the committee's concern about confusion between regulators and the government over the OSA's role in regulating AI and the spread of misinformation.
Digital Advertising Market
The business models of social media companies encourage the
algorithmic spread of engaging content, regardless of whether it
is false or harmful. This has an effect across the entire online
ecosystem. The committee found a regulatory gap in this area,
with platforms and advertisers ‘unable or unwilling’ to address the monetisation of false and harmful content.
The Chair of the Science, Innovation and Technology Committee, Dame Chi Onwurah MP, said:
“Social media can undoubtedly be a force for good, but it has
a dark side. The viral amplification of false and harmful content
can cause very real harm – helping to drive the riots we saw last
summer. These technologies must be regulated in a way that
empowers and protects users, whilst also respecting free
speech.
“It's clear that the Online Safety Act just isn't up to
scratch. The government needs to go further to tackle the
pervasive spread of misinformation that causes harm but doesn't
cross the line into illegality. Social media companies are not
just neutral platforms but actively curate what you see online,
and they must be held accountable. To create a stronger online
safety regime, we urge the government to adopt five principles as
the foundation of future regulation, ranging from protecting free
expression to holding platforms accountable for content they put
online.
“Today's report sets out a way forward for the government to
ensure that people in the UK can stay safe online and control
what they see, by disincentivising the viral spread of
misinformation, regulating generative AI, and placing much-needed
new standards onto social media companies.
“A national conversation is already underway on this vital
issue – we look forward to the government's response to our
report and will continue to examine the consequences of unchecked
online harms, particularly for young people, in the months to
come.”
ENDS
Notes to editors:
- The embargoed copy of the report is attached to this
email
- The summary of the report is on pages 1-3; the report's conclusions and recommendations are listed in full from page 51
- Full details of the committee's inquiry into social media,
misinformation and harmful algorithms can be found here