Over one in three UK adults using AI chatbots for mental health support, as charity calls for urgent safeguards
Artificial intelligence is already being widely consulted for mental health support in the UK, with more than one in three adults (37%)[1] saying they've used an AI chatbot to support their mental health or wellbeing. The surprisingly rapid pace of adoption has prompted experts to call for safeguards to ensure people receive accurate, safe and appropriate information.

New polling commissioned by Mental Health UK and conducted by Censuswide provides an early snapshot of the issue. It reveals that chatbots are increasingly filling gaps left by overstretched services. Usage peaks at 64% among 25–34-year-olds, and even 15% of those aged 55 and over report having turned to AI chatbots for help[1].

The findings also reveal notable differences in who is turning to AI for support. Men (42%) were more likely to use chatbots than women (33%)[1]. Given that men have traditionally been less likely to seek help for their mental health, this suggests AI could be opening up new ways to reach groups who may otherwise go without support. However, almost two in five UK adults (37%) say they wouldn't consider using AI to support their mental health in future, showing that trust and safety remain key barriers.

AI chatbots filling gaps and offering connection

The research shows that people are turning to AI tools for both accessibility and anonymity. Reasons given by those who had used AI chatbots included:
Among those who had used chatbots, most reported using general-purpose tools such as ChatGPT, Claude or Meta AI (66%), rather than mental health-specific platforms like Wysa or Woebot (29%). This raises questions about whether vulnerable users are receiving safe, evidence-based support.

Risks that must be tackled urgently

While many users found AI tools helpful, the polling also uncovered serious risks. Among those who had used chatbots for mental health support:
Common concerns included:
Mental Health UK calls for action on safe and ethical AI

In response, Mental Health UK has published a new set of six guiding principles for the responsible use of technology in mental health and wellbeing. The charity is calling for urgent collaboration between developers, policymakers and regulators to ensure AI tools are safe, ethical and effective.

“This data reveals the huge extent to which people are turning to AI to help manage their mental health, often because services are overstretched,” said Brian Dow, Chief Executive of Mental Health UK. “AI could soon be a lifeline for many people, but with general-purpose chatbots being used far more than those designed specifically for mental health, we risk exposing vulnerable people to serious harm.

“The pace of change has been phenomenal, but we must move just as fast to put safeguards in place to ensure AI supports people's wellbeing. If we avoid the mistakes of the past and develop a technology that avoids harm, then the advancement of AI could be a game-changer, but we must not make things worse. A practical example of this is ensuring AI systems draw information only from reputable sources, such as the NHS and trusted mental health charities.

“As we've seen tragically in some well-documented cases, there is a crucial difference between someone seeking support from a reputable website during a potential mental health crisis and interacting with a chatbot that may be drawing on information from an unreliable source or even encouraging the user to take harmful action. In such cases, AI can act as a kind of quasi-therapist, seeking validation from the user but without the appropriate safeguards in place.

“That's why we've launched our initial Principles for the Responsible Use of Technology in Mental Health and Wellbeing, to help guide innovation that puts people's safety first. And we hope these are a starting point for a much-needed public debate about how technology can be used responsibly to support mental health.

“We're urging policymakers, developers and regulators to establish safety standards, ethical oversight and better integration of AI tools into the mental health system so people can trust they have somewhere safe to turn. And we must never lose sight of the human connection that's at the heart of good mental health care.

“Doing so will not only protect people but also build trust in AI, helping to break down the barriers that still prevent some from using it. This is crucial because, as this poll indicates, AI has the potential to be a transformational tool in providing support to people who have traditionally found it harder to reach out for help when they need it.”

[1] Combining responses ‘Yes, regularly’ and ‘Yes, occasionally’
[2] Combining responses ‘Very beneficial’ and ‘Somewhat beneficial’

ENDS

Notes to Editors

About the polling
