Anthropic has released preliminary findings from an analysis of 4.5 million anonymized interactions with its Claude AI assistant, one of the largest datasets to date on how consumers use large-language-model chatbots for personal matters. The study, titled “How People Use Claude for Support, Advice, and Companionship,” found that just 2.9 percent of conversations involved emotional or affective topics such as loneliness, relationships or existential questions. Despite the small share, Anthropic said sentiment in those exchanges generally became more positive as the dialogue progressed, suggesting that users seeking support typically expressed more positive sentiment by the end of a conversation than at the start. By contrast, the overwhelming majority of chats focused on practical tasks—ranging from writing code to drafting work documents—underscoring that Claude remains primarily a productivity tool rather than a substitute for human companionship. Anthropic said the results could help guide product safeguards and design choices as demand grows for AI capable of handling sensitive personal issues.
IMO it's dumb to conclude people aren't using AI for companionship based on Claude data. People use different models for different things. Most Claude use cases (as Anthropic's report highlights) are work-related / coding. People use other LLMs for emotional support. https://t.co/fgE9oaafUJ
Anthropic’s latest study reveals that a small but meaningful number of users are turning to Claude for emotional support—navigating everything from career transitions to existential dread. Only 2.9% of chats are affective, but those who open up often leave feeling better. Claude https://t.co/nW7lEu471m https://t.co/VkEbR5cFmr
AI has range. From tackling code bugs to tackling existential dread—Claude’s journey into emotional support reminds us: Not all use cases wear a suit and tie. Some wear pajamas, worry at midnight, and ask: “What now?” The future of AI? Less transactional. More human. Thanks https://t.co/9igLn2qbeZ