
Recent advancements in Retrieval Augmented Generation (RAG) systems have focused on optimizing query generation and addressing key failure modes in knowledge graph-based systems. A new approach to query generation using large language models (LLMs) has been shown to improve document retrieval accuracy by an average of 1.6% while reducing hallucinations. Additionally, a study titled 'Mindful-RAG' has identified critical points of failure related to question intent and context alignment in these systems. The introduction of 'Mockingbird,' a new LLM, has also drawn attention, reportedly outperforming both GPT-4 and Gemini 1.5 Pro in RAG output quality, citation accuracy, multilingual performance, and structured output accuracy. Furthermore, practitioners are exploring advanced retrieval techniques, including query rewriting and hypothetical document embeddings, to further improve RAG system performance. However, challenges remain, particularly with complex documents containing tables and diagrams, which can exacerbate hallucination issues.



Ever fed a document to an LLM and wondered what’s happening behind the scenes? With our new X-Ray you can upload complex visual documents and get LLM-friendly semantic objects to reduce hallucination and improve performance. Try it for yourself: https://t.co/xZs2tkWYmw #RAG https://t.co/1KIsrvbqCa
Building RAG with complex documents is a nightmare. Large Language Models don't work well when documents contain tables, diagrams, and forms. Anyone who's tried knows that hallucinations are horrible, and the tools out there don't solve the problem. Now, there’s a way to make… https://t.co/oqCpKEbt6a
From query rewriting to hypothetical document embeddings, Meghan Heintz presents a clear and concise guide to advanced retrieval techniques to improve the performance of your RAG system. https://t.co/EVH1M3J4t1
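Query rewriting, the other technique referenced above, can be sketched in a few lines: an LLM paraphrases the query into several variants, each variant is used to retrieve, and the results are merged. The LLM rewrite step is stubbed with canned variants here, and the retriever is a trivial keyword matcher; the names are illustrative assumptions, not Meghan Heintz's implementation.

```python
def rewrite_query(query: str) -> list[str]:
    """Stub for an LLM rewrite step: a real system would prompt the model
    for paraphrases; here we return canned variants for illustration."""
    return [query, query.replace("docs", "documents")]

def keyword_retrieve(query: str, corpus: list[str]) -> list[str]:
    """Trivial retriever: return documents sharing any term with the query."""
    terms = set(query.lower().split())
    return [d for d in corpus if terms & set(d.lower().split())]

def multi_query_retrieve(query: str, corpus: list[str]) -> list[str]:
    """Retrieve with each rewrite and merge results, preserving first-seen order."""
    seen, merged = set(), []
    for q in rewrite_query(query):
        for doc in keyword_retrieve(q, corpus):
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged

corpus = [
    "complex documents with tables and diagrams",
    "a guide to gardening",
]
# The original query misses the first document ("docs" vs. "documents"),
# but the rewritten variant recovers it.
print(multi_query_retrieve("retrieving docs", corpus))
```

Merging over several rewrites trades extra retrieval calls for recall: phrasings the user didn't type can still surface the relevant document.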