
Recent advancements in the field of Large Language Models (LLMs) have focused on addressing hallucinations, where models generate content that is not grounded in their training data or in the provided context. Lilian Weng has published a technical blog post, 'Extrinsic Hallucinations in LLMs,' which discusses the sources, detection techniques, and mitigation strategies for these hallucinations. The Lookback Lens method, developed by Yung-Sung Chuang and colleagues, uses attention maps to detect and mitigate contextual hallucinations; its guided decoding has shown a 10% reduction in contextual hallucinations on the XSum summarization task. Additionally, NVIDIA's Weight-Decomposed Low-Rank Adaptation (DoRA) has been shown to outperform existing methods for fine-tuning LLMs and Vision Language Models (VLMs). Another notable technique is DoLa (Decoding by Contrasting Layers), which contrasts a model's final layer with earlier layers to prioritize factually correct tokens. These advancements highlight the ongoing efforts to improve the reliability and accuracy of LLM outputs. A discussion on these topics is scheduled for 8 PM IST/7:30 AM PST.
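The core signal behind Lookback Lens is straightforward to sketch: at each decoding step, compare how much attention each head places on the provided context versus on tokens the model has already generated, then train a lightweight classifier on those per-head ratios. The Python sketch below is illustrative only; the `attn` tensor shape, the span-averaging, and the logistic-regression detector are assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the lookback-ratio idea, assuming `attn` holds one decoding
# step's attention weights with shape (layers, heads, seq_len) and that the
# first `context_len` positions correspond to the provided context.
import numpy as np
from sklearn.linear_model import LogisticRegression

def lookback_ratio(attn: np.ndarray, context_len: int) -> np.ndarray:
    """Per-(layer, head) share of attention on context tokens vs. generated tokens."""
    context_mass = attn[:, :, :context_len].sum(axis=-1)
    generated_mass = attn[:, :, context_len:].sum(axis=-1)
    return context_mass / (context_mass + generated_mass + 1e-9)

def build_features(attn_per_step: list[np.ndarray], context_len: int) -> np.ndarray:
    """Average the ratios over a generated span: one feature per (layer, head)."""
    ratios = np.stack([lookback_ratio(a, context_len) for a in attn_per_step])
    return ratios.mean(axis=0).ravel()

def train_detector(features: np.ndarray, labels: np.ndarray) -> LogisticRegression:
    """Fit a simple linear detector; labels mark spans annotated as hallucinated."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, labels)
    return clf
```

In the guided-decoding setting described in the paper, a detector like this scores candidate continuations so that spans with healthier lookback ratios are preferred.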
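DoRA's central move can be sketched in a few lines of PyTorch: decompose the pretrained weight into a column-wise magnitude and a direction, apply a LoRA-style low-rank update only to the direction, and train the magnitude as a separate parameter. The module below is a sketch under those assumptions, with illustrative names, not NVIDIA's reference implementation.

```python
# Illustrative sketch of weight decomposition for DoRA-style fine-tuning.
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        # Frozen pretrained weight, kept as a buffer.
        self.register_buffer("weight", base.weight.detach().clone())
        self.bias = base.bias
        out_f, in_f = self.weight.shape
        # Trainable column-wise magnitude of the pretrained weight.
        self.magnitude = nn.Parameter(self.weight.norm(dim=0, keepdim=True))
        # LoRA-style low-rank update applied to the direction component
        # (B starts at zero, so the initial forward pass reproduces the base layer).
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.lora_B @ self.lora_A
        directional = self.weight + delta
        # Normalize columns, then rescale by the learned magnitude.
        directional = directional / directional.norm(dim=0, keepdim=True)
        w = self.magnitude * directional
        return nn.functional.linear(x, w, self.bias)
```

Separating magnitude from direction is what distinguishes this from plain LoRA: the low-rank update changes only where the weight points, while the learned magnitude controls how strongly.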
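The layer-contrasting intuition behind DoLa can also be sketched: project both an earlier ("premature") layer and the final layer through the language-model head, and favor tokens whose log-probability increases between the two. The fixed premature-layer index, the plausibility threshold, and the omission of the model's final layer norm are simplifying assumptions here; the actual method selects the premature layer dynamically.

```python
# Simplified sketch of contrasting a premature layer against the final layer.
import torch
import torch.nn.functional as F

def dola_contrast_logits(hidden_states: list[torch.Tensor],
                         lm_head: torch.nn.Linear,
                         premature_layer: int = 16,
                         alpha: float = 0.1) -> torch.Tensor:
    """Return contrasted next-token scores for the last position of a batch."""
    mature = F.log_softmax(lm_head(hidden_states[-1][:, -1, :]), dim=-1)
    premature = F.log_softmax(lm_head(hidden_states[premature_layer][:, -1, :]), dim=-1)
    # Keep only tokens that are reasonably probable under the final layer.
    threshold = mature.max(dim=-1, keepdim=True).values + torch.log(torch.tensor(alpha))
    plausible = mature >= threshold
    # Boost tokens whose probability grows from the premature to the final layer.
    scores = mature - premature
    return scores.masked_fill(~plausible, float("-inf"))
```

The next token would then be chosen from these contrasted scores instead of the raw final-layer logits, which is what nudges decoding toward factually grounded tokens.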

Lookback Lens: Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps https://t.co/kpgaw9eKAL
Beyond Accuracy to Understanding: Leveraging Stanford's Hallucination Framework > Many thanks to Tycho Orton for this think piece about the fallout and implications of the Stanford University study into genAI tools. #legaltech #genAI #lawtwitter https://t.co/qF1zupeJgO
🤖 From this week's issue: Lilian Weng’s comprehensive post highlighting the causes, detection techniques, and mitigation measures for extrinsic hallucinations. https://t.co/sld4c7gG8e