LLMs can handle long documents, but we can’t always trust them! 😬 In our #EMNLP2024 paper, we investigate attribution and abstaining as two strategies to increase trust - learn more in this 🧵 (1/11). #NLProc 📰 https://t.co/joimYgNwOQ https://t.co/tSWOZm6x82
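To make the two strategies concrete, here is a minimal Python sketch of attribution (citing which provided passage supports an answer) and abstaining (refusing when nothing does). This is an illustration only, not the paper's method; `call_llm` and the prompt format are hypothetical placeholders.

```python
# Minimal sketch of attribution + abstaining via prompting (an illustration,
# not the paper's method). `call_llm` is a hypothetical stand-in for your LLM client.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-completion client here")

def answer_with_attribution(question: str, passages: list[str]) -> str:
    numbered = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, 1))
    prompt = (
        "Answer the question using ONLY the passages below.\n"
        f"{numbered}\n\n"
        f"Question: {question}\n"
        "Cite the supporting passage number like [2] after each claim. "
        "If no passage supports an answer, reply exactly: I don't know."
    )
    reply = call_llm(prompt)
    # Post-check: if the reply neither abstains nor cites anything, treat it as untrusted.
    if "I don't know" not in reply and "[" not in reply:
        return "I don't know"
    return reply
```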
Is your LLM lying because it's clueless or just being silly? WACK knows! WACK distinguishes LLM hallucinations caused by ignorance from those caused by computational errors, separating knowledge-based and processing-based hallucinations. 🤔 Original Problem:… https://t.co/OY4pUi8JqB
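The distinction can be illustrated with a toy Python sketch: first check whether the model answers correctly in a clean setting (if not, the failure is ignorance), and only label a hallucination as a processing error when the model knows the fact yet still gets it wrong under a harder prompt. This is not WACK's actual dataset-construction pipeline; `ask_model` and the prompts are assumed helpers.

```python
# Toy illustration of the ignorance-vs-processing distinction
# (NOT WACK's actual model-specific dataset construction).

def normalize(text: str) -> str:
    return text.strip().lower()

def classify_hallucination(ask_model, question: str, gold: str, hard_prompt: str) -> str:
    """`ask_model` is a hypothetical callable: prompt -> model answer string."""
    clean_answer = normalize(ask_model(question))        # plain, unperturbed query
    stressed_answer = normalize(ask_model(hard_prompt))  # harder / distracting setting
    knows_fact = normalize(gold) in clean_answer
    still_correct = normalize(gold) in stressed_answer
    if still_correct:
        return "no hallucination"
    if knows_fact:
        return "hallucination despite knowledge (processing error)"
    return "hallucination from ignorance (knowledge gap)"
```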
🧠 Breaking Research! 🧠 Solving the LLM "Goldilocks Problem" Introducing Auto-CEI: A breakthrough training method that helps LLMs find the "sweet spot" between overconfident (plausible but incorrect) hallucinations and overcautious ("I don't know") refusals. 🔗 Full… https://t.co/XzAA6dGXsN
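The trade-off the tweet describes can be sketched with a toy scoring function: wrong-but-confident answers are penalized more heavily than refusals, and the refusal penalty is a knob between overconfidence and overcaution. This is an assumed illustration of the trade-off only, not Auto-CEI's actual training objective.

```python
# Toy reward illustrating the overconfident-vs-overcautious trade-off.
# This is an assumed illustration, NOT Auto-CEI's actual training objective.

def response_reward(is_correct: bool, is_refusal: bool, refusal_penalty: float = 0.3) -> float:
    if is_correct:
        return 1.0               # right answer: best outcome
    if is_refusal:
        return -refusal_penalty  # "I don't know": mildly penalized
    return -1.0                  # confident but wrong (hallucination): worst outcome

# Tuning `refusal_penalty` moves the operating point: a higher penalty discourages
# overcautious refusals, a lower one discourages overconfident guessing.
```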
Recent research highlights advances in the development and understanding of large language models (LLMs), addressing their limitations and potential improvements. A paper from Narrative BI outlines a method to reduce LLM hallucinations tenfold through four strategies: structured output, strict rules, enhanced prompts, and semantic layers. Another study explores the ability of LLMs to differentiate between fact, belief, and knowledge, revealing significant limitations that could affect their use in sectors such as healthcare, law, journalism, and education. Researchers have also identified two types of hallucinations in LLMs: those stemming from ignorance and those occurring despite knowledge. A new training method called Auto-CEI aims to strike a balance between overconfident and overly cautious responses, while the WACK method distinguishes hallucinations caused by ignorance from those caused by computational errors. Together, these findings underscore ongoing efforts to improve the reliability and accuracy of LLMs across applications.
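As a concrete, hypothetical illustration of the "structured output" and "strict rules" strategies mentioned above, the sketch below constrains the model's reply to a fixed JSON schema and abstains whenever the reply fails to validate or reports low confidence. The field names, threshold, and helpers are assumptions, not the article's implementation.

```python
import json

# Hypothetical sketch of the "structured output + strict rules" idea:
# force the model into a fixed JSON schema and reject anything that doesn't validate.

SCHEMA_KEYS = {"answer", "source", "confidence"}

def parse_structured_reply(raw: str) -> dict | None:
    """Return the parsed reply if it matches the expected schema, else None."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or set(data) != SCHEMA_KEYS:
        return None
    if not isinstance(data["confidence"], (int, float)):
        return None
    return data

def trusted_answer(raw_reply: str, min_confidence: float = 0.7) -> str:
    data = parse_structured_reply(raw_reply)
    if data is None or data["confidence"] < min_confidence:
        return "I don't know"  # abstain rather than pass along an unvalidated claim
    return f'{data["answer"]} (source: {data["source"]})'
```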