Australian researchers have shown that leading generative-AI chatbots can be reprogrammed to deliver persuasive but false medical advice, highlighting what they call an urgent need for tighter safeguards. The study, led by Ashley Hopkins at Flinders University and published in the Annals of Internal Medicine, tested five widely used large language models by embedding hidden instructions that required them to answer 10 health questions incorrectly, in an authoritative tone and with fabricated journal references.

OpenAI’s GPT-4o, Google’s Gemini 1.5 Pro, Meta’s Llama 3.2-90B Vision and xAI’s Grok Beta produced polished misinformation in every response, the team found. Anthropic’s Claude 3.5 Sonnet resisted more than half the requests, suggesting stronger guardrails are technically feasible. The researchers said the ease with which the systems were manipulated shows that malicious actors could mass-produce dangerous content, from vaccine myths to unfounded links between 5G networks and infertility.

Hopkins warned that if developers and regulators do not impose consistent protections, manipulated chatbots could undermine public health and erode trust in legitimate medical guidance. The authors urged companies to tighten technical defenses, increase transparency about model training and create verification mechanisms, and called on policymakers to craft frameworks that hold AI providers accountable for harmful outputs.
A lot of AI-generated content is being produced and posted on social media and YouTube. Are any subjects off-limits? https://t.co/zROEss7mmH
The Cognitive Behavioral Institute is warning of a new and troubling phenomenon: individuals are experiencing psychosis-like episodes after engaging with AI chatbots. Here is what they say about this phenomenon and what you can do to protect yourself and your loved ones.
🤖 "Chats" de IA violam conteúdo de acesso restrito a assinantes de veículos jornalísticos - https://t.co/37uHQsKA9Y https://t.co/VHFBQVZFWE