
Recent advancements in Large Language Models (LLMs) have introduced methodologies such as RAFT (Retrieval Augmented Fine Tuning) and Adaptive-RAG (Adaptive Retrieval-Augmented Generation), aimed at enhancing domain-specific question-answering and at adapting retrieval-augmented strategies to query complexity. RAFT, a collaboration between Microsoft and UC Berkeley, trains LLMs to use relevant context while disregarding distractor documents, reportedly outperforming standard fine-tuning techniques. Adaptive-RAG, meanwhile, dynamically selects the most suitable retrieval-augmented strategy for each query, balancing iterative and single-step retrieval augmentation approaches. These developments promise to improve the performance, accuracy, and verifiability of LLMs, particularly in specialized fields like biomedicine and coding, and are being explored for applications in AI coding assistants and for improving the truthfulness of RAG outputs. Notably, decentralized Retrieval Augmented Generation (dRAG) on origin_trail, and efforts to make RAG applications more robust with RAFT, involving @AIatMeta Llama 7B and @OpenAI GPT-3.5, are part of these advancements. An article by Marlon Hamm also explores methods to enhance the truthfulness of RAG application outputs.
"This article explores methods to enhance the truthfulness of Retrieval Augmented Generation (RAG) application outputs, focusing on mitigating issues like hallucinations and reliance on pre-trained knowledge." by Marlon Hamm https://t.co/jLsqdABXRP
Can we make RAG applications more robust with fine-tuning? A paper by @Microsoft and UC Berkeley put this to the test to see if small open LLMs, like @AIatMeta Llama 7B, can match @OpenAI GPT-3.5. They called it “Retrieval Augmented Fine Tuning (RAFT)”, where you train an LLM… https://t.co/AX1uhzydyq
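To make the RAFT recipe concrete, here is a minimal sketch of how such training examples might be constructed: each question is paired with distractor documents, and the golden (answer-bearing) document is included only for a fraction of examples, so the model learns to cite relevant context and ignore the rest. The function name, the `GOLDEN_FRACTION` value, and the prompt layout are illustrative assumptions, not the paper's released code.

```python
# A minimal sketch of RAFT-style training-data construction.
# GOLDEN_FRACTION, build_raft_example, and the prompt layout are
# illustrative assumptions, not the authors' implementation.
import random
from dataclasses import dataclass

@dataclass
class RaftExample:
    prompt: str   # question plus a shuffled mix of documents
    target: str   # chain-of-thought answer grounded in the golden document

GOLDEN_FRACTION = 0.8  # assumed share of examples that keep the golden document

def build_raft_example(question: str, golden_doc: str,
                       distractor_docs: list[str], answer_cot: str) -> RaftExample:
    """Pair a question with distractors, sometimes dropping the golden doc.

    Training on examples where the answer-bearing document is absent
    pushes the model to disregard irrelevant context rather than copy it.
    """
    docs = list(distractor_docs)
    if random.random() < GOLDEN_FRACTION:
        docs.append(golden_doc)          # golden doc present: learn to cite it
    random.shuffle(docs)                 # hide the golden doc's position
    context = "\n\n".join(f"[Doc {i}] {d}" for i, d in enumerate(docs))
    prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
    return RaftExample(prompt=prompt, target=answer_cot)
```

The resulting examples are then used for standard supervised fine-tuning, with the chain-of-thought target teaching the model to quote and reason over the relevant document.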
[CL] Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity https://t.co/xNJM3pZczQ - Adaptive Retrieval-Augmented Large Language Models (LLMs) can balance between iterative and single-step retrieval augmentation approaches… https://t.co/qW5mhYGLoe
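The core of Adaptive-RAG is a router that predicts a query's complexity and dispatches it to the cheapest adequate strategy: no retrieval, single-step retrieval, or iterative multi-step retrieval. The sketch below illustrates only that control flow; the keyword-based classifier is a toy stand-in (the paper trains a small language-model classifier), and the three answer functions are placeholders for real pipelines.

```python
# A minimal sketch of Adaptive-RAG's routing logic. The heuristic
# classifier and the three answerer callables are stand-in assumptions;
# the paper uses a learned complexity classifier, not keyword rules.
from typing import Callable

def classify_complexity(query: str) -> str:
    """Toy stand-in for the paper's learned complexity classifier."""
    if len(query.split()) < 6:
        return "simple"        # likely answerable from parametric knowledge
    if "compare" in query.lower() or " and " in query:
        return "complex"       # likely needs multi-hop evidence
    return "moderate"

def answer(query: str,
           no_retrieval: Callable[[str], str],
           single_step: Callable[[str], str],
           multi_step: Callable[[str], str]) -> str:
    """Dispatch the query to the cheapest strategy it is predicted to need."""
    route = {
        "simple": no_retrieval,    # LLM answers directly
        "moderate": single_step,   # one retrieval pass, then answer
        "complex": multi_step,     # iterative retrieve-and-reason loop
    }
    return route[classify_complexity(query)](query)
```

This routing is what lets Adaptive-RAG balance iterative and single-step retrieval: easy queries skip retrieval overhead entirely, while multi-hop questions get the more expensive iterative treatment.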