A recent UK study has raised concerns about the reliability of AI chatbots as a source of health advice. The research, which involved 1,300 participants using popular AI models such as ChatGPT, GPT-4o, and Meta's Llama 3, found that many users struggled to obtain accurate or helpful medical guidance from these tools. Although roughly one in six American adults reportedly consults chatbots for health advice at least monthly, the study suggests that relying on AI for symptom diagnosis may be riskier than anticipated. This contrasts with ongoing advances in AI healthcare technology, which have demonstrated capabilities such as early lung cancer detection and improved pediatric protection. The findings underscore the need for caution when turning to AI chatbots for medical advice, as current models may not yet consistently meet the demands of reliable health consultation.
🤖 Would you trust AI with your health? An Oxford study reveals why using chatbots for symptom diagnosis could be riskier than you think. Here’s what you must know before turning to AI for medical advice 👇 https://t.co/EcaRIb8JrK
"About one in six American adults already use chatbots for health advice at least monthly" -> People struggle to get useful health advice from chatbots, study finds [Study of 1,300 people in the UK using ChatGPT, GPT-4o, and Meta's Llama 3] “Those using [chatbots] didn’t make https://t.co/ndYjHTRf4I
Can we really trust AI to tell us what to do when a health problem arises? Apparently not: a British study has just confirmed it. https://t.co/9CW31QLB1o