AI chatbots such as ChatGPT, Baidu's Ernie Bot, Wysa, and DeepSeek are seeing increased use for mental health support in countries including Australia, the UK, China, and Taiwan. People are turning to these platforms for emotional guidance, relationship advice, and therapy-like conversations, especially where traditional mental health services involve long waitlists and high costs. In April 2024, England recorded nearly 426,000 mental health referrals, and about 30 NHS services now use Wysa.

AI chatbots offer 24/7 availability and anonymity, which makes them attractive to people who are hesitant to see a human therapist or unable to afford private care. In China and Taiwan, younger users cite privacy, affordability, and social stigma as reasons for turning to AI companions. Tools like Wysa provide coping strategies and escalation pathways for crisis situations, and a Dartmouth College study reported a 51% reduction in depressive symptoms after four weeks of chatbot use.

Mental health professionals caution that AI chatbots lack the nuanced understanding, non-verbal cues, and crisis-intervention abilities of human therapists. In some cases, reliance on chatbots has delayed or replaced necessary professional intervention, including one that led to a legal case involving Character.ai. Experts also warn of reinforced biases, missed signs of severe distress, and an absence of regulatory oversight. Privacy and data security remain key concerns; users are encouraged to review privacy settings and to be cautious about sharing sensitive information.

Professional associations stress that AI should be treated as an auxiliary tool rather than a replacement for in-person care. The Harvard Business Review has identified psychological assistance as a leading reason adults use AI chatbots, and research teams are developing models such as ALFA to improve the question-asking skills of large language models in clinical settings. Despite these limitations, experts acknowledge AI's potential to expand access to mental health resources, particularly in underserved or remote areas. The normalization of AI in therapeutic practice is still underway, and ethical integration, robust privacy safeguards, and clear boundaries between AI support and professional care are widely recommended. Support resources such as the BBC Actionline are available for those affected by mental health issues.
Chatbots and other AI tools could one day be a go-to resource for people seeking medical advice, but first they need to ask better questions. SCS researchers are part of a team working to help large language models generate those questions. https://t.co/hXQYcPDZyb
🇹🇼🇨🇳AI THERAPY TAKES OFF IN CHINA AND TAIWAN, BUT DOCTORS WARN OF DANGERS Young people in China and Taiwan are turning to AI chatbots like ChatGPT and Baidu's Ernie Bot for therapy, drawn in by privacy, affordability, and 24/7 access. But mental health professionals warn that these tools lack the crisis-intervention abilities of a human therapist. https://t.co/6eWGvvbKdF https://t.co/xHlC7xHUgm
The Guardian: In Taiwan and China, young folks are swapping late-night chats with friends for AI therapy. Easier, cheaper, but are we risking a robotic shoulder instead of a caring ear? Let's hope Nordic innovation keeps the human touch alive in mental health care. https://t.co/sv58524oqm