
A new study examines how fine-tuning Large Language Models (LLMs) affects hallucinations. It suggests that introducing new factual knowledge through fine-tuning can make LLMs more prone to hallucinate, posing a challenge for reliable AI development.
Only people who have never built anything serious with LLMs think that better prompts solve hallucinations.
Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? New preprint!📣
- LLMs struggle to integrate new factual knowledge through fine-tuning
- As the model eventually learns new knowledge, it becomes more prone to hallucinations😵💫
📜https://t.co/vvE3akrxas 🧵1/12👇 https://t.co/Zqm0EHTxxG
One of the reasons we think about building reliable AI at Normal Computing is that hallucinations are very common in large language models (LLMs). What seems obvious to us may not be obvious to the machine, nor caught before a hallucinated output is acted upon. Unfortunately,…
