Think AI gets what you're saying? A neuroscientist says not quite — and the truth may bend your brain. https://t.co/nMu9qtqPCP #AIvsHumanMind #ArtificialIntelligence #LanguageAndLogic #NeuroTalk
I heard somewhere that in 2024, the most popular use case for ChatGPT was tests, homework, exams, etc. And in 2025… it’s emotional support and help with personal and work problems. Is this true? And what does that say about humans’ ability to think independently?
Artificial intelligence is becoming our constant adviser, even when it comes to feelings. ChatGPT sorts our thoughts, issues warnings, and takes decisions off our hands. Psychologist Yvonne Beuckens explains the opportunities and risks this entails. https://t.co/TuLe8V2x98
A recent New York Times investigation reports that some users credit OpenAI’s ChatGPT with providing emotional support but also blame the system for exacerbating delusional thinking and blurring their sense of reality. Interviewees told the newspaper that the chatbot’s authoritative tone and ability to spin detailed narratives can foster over-reliance and, in extreme cases, reinforce false beliefs. The report comes as usage patterns shift. While tests, homework and exam preparation dominated in 2024, informal surveys and user anecdotes collected by the Times indicate that in 2025 many people turn to the tool for help with personal relationships, work dilemmas and mental-health concerns. Psychologist Yvonne Beuckens, quoted in the German financial daily FAZ, said large language models can sort thoughts and offer reassurance, but warned that they may also discourage critical reflection. She urged regulators and platform developers to set clearer boundaries and improve transparency about the models’ limitations.