
Recent research introduces new methods to improve the reasoning and generation capabilities of Large Language Models (LLMs). Retrieval Augmented Thoughts (RAT) iteratively revises a chain of thoughts using retrieved information, improving LLM performance on long-horizon tasks. Separately, Bias-Augmented Consistency Training aims to reduce biased reasoning in models, making their explanations more faithful.
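The RAT-style loop described above can be sketched in a few lines. This is a toy illustration only, not the paper's implementation: `retrieve` and `revise_step` are hypothetical stand-ins for a real search index and an LLM call, and the word-overlap retriever exists purely so the example runs self-contained.

```python
# Hedged sketch of a Retrieval Augmented Thoughts (RAT)-style revision loop.
# Assumptions: `retrieve` and `revise_step` are toy stand-ins; a real system
# would query a retrieval index and prompt an LLM, respectively.

def retrieve(query, corpus):
    """Toy retriever: return corpus entries sharing any word with the query."""
    words = set(query.lower().split())
    return [doc for doc in corpus if words & set(doc.lower().split())]

def revise_step(step, evidence):
    """Toy reviser: attach the top retrieved passage to the thought step."""
    if evidence:
        return f"{step} [supported by: {evidence[0]}]"
    return step

def rat(initial_thoughts, corpus):
    """Revise each thought step with retrieved context, conditioning each
    retrieval on the most recent already-revised step (the iterative part)."""
    revised = []
    for step in initial_thoughts:
        query = " ".join(revised[-1:] + [step])
        evidence = retrieve(query, corpus)
        revised.append(revise_step(step, evidence))
    return revised

corpus = ["A list is mutable in Python", "A tuple is immutable"]
thoughts = ["Use a list to accumulate results", "Freeze the result as a tuple"]
print(rat(thoughts, corpus))
```

The key design point, per the RAT description, is that each step's retrieval query includes the previously revised steps, so corrections propagate forward through the chain.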



TRAD: Enhancing LLM Agents with Step-Wise Thought Retrieval and Aligned Decision https://t.co/2KuLwNojsk
New study introduces Bias-Augmented Consistency Training to curb biased reasoning in AI, enhancing model explainability with promising results in consistency & bias reduction: https://t.co/RZu54ZsVzo https://t.co/Por8QH1RLR
🚀New paper!🚀 Chain-of-thought (CoT) prompting can give misleading explanations of an LLM's reasoning, due to the influence of unverbalized biases. We introduce a simple unsupervised consistency training method that dramatically reduces this, even on held-out forms of bias. 🧵 https://t.co/LIxyqPLg9v
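One plausible way to picture the consistency objective hinted at above: pair each clean prompt with a bias-augmented variant and penalize divergence between the model's answer distributions on the two. This is an illustrative sketch, not the paper's method; `consistency_loss` and the symmetric-KL choice are assumptions made for the example.

```python
# Hedged sketch of a consistency objective over paired prompts.
# Assumption (not stated in the tweets): training minimizes a symmetric KL
# divergence between answer distributions on clean vs. bias-augmented prompts.
import math

def kl(p, q):
    """KL divergence between two discrete distributions (lists of probs)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def consistency_loss(clean_probs, biased_probs):
    """Symmetric KL: zero when the model answers identically on both
    prompts, positive when the bias shifts its answer distribution."""
    return kl(clean_probs, biased_probs) + kl(biased_probs, clean_probs)

# Identical answers on both prompt variants -> no penalty.
print(consistency_loss([0.7, 0.3], [0.7, 0.3]))
# The biased prompt shifts the answer -> positive penalty.
print(consistency_loss([0.7, 0.3], [0.3, 0.7]))
```

Minimizing such a loss pushes the model to ignore the injected bias feature, which is one way a method could generalize to held-out forms of bias.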