#AI Agents, Automaticity, and Value Alignment in #Health Care | @NEJM_AI https://t.co/JMEYQmJoSM #MedTwitter #AImedicine #AI #HealthAI #AIinHealthcare #MedTech
#FDA’s #artificialintelligence is supposed to revolutionize #drug approvals. It’s making up nonexistent studies https://t.co/trjU51oHBx #MedTwitter #AImedicine #AI #HealthAI #AIinHealthcare #MedTech
#AI is already touching nearly every corner of the #medical field | @FortuneMagazine https://t.co/co49kbWzpN #MedTwitter #AImedicine #AI #HealthAI #AIinHealthcare #MedTech
The US Food and Drug Administration’s new artificial-intelligence assistant, known as Elsa, is producing fabricated research citations and misrepresenting scientific data, according to a CNN investigation that interviewed six current and former agency staff and reviewed internal documents. Review scientists said the system has quoted studies that do not exist, has returned incorrect counts of drug labels, and lacks access to key industry submissions, forcing employees to double-check its output.

FDA officials acknowledge the shortcomings. Jeremy Walsh, the agency’s head of AI, told CNN that hallucinations are a risk shared by many large language models and said upgrades allowing users to upload vetted documents would roll out “in the coming weeks.” Commissioner Dr. Marty Makary said use of Elsa is voluntary, while maintaining that the tool can accelerate tasks such as meeting summaries and protocol reviews.

Elsa was deployed across the FDA on 2 June 2025 after a brief pilot that leadership described as “ahead of schedule and under budget,” costing about $12,000 in its first four weeks. Health and Human Services Secretary Robert F. Kennedy Jr. has promoted the project as evidence that an “AI revolution” is streamlining drug approvals. Staff interviewed by CNN, however, said the need to verify Elsa’s work is eroding any promised efficiency gains.

The revelations land as Congress debates broader rules for high-risk medical AI and as the Trump administration pushes federal agencies to embrace the technology. With no binding regulatory framework in place, the reliability concerns surrounding Elsa highlight the stakes of using generative AI in decisions that can determine whether new medicines reach the US market.