
Recent advances in language model (LM) technology have introduced new prompting strategies aimed at enhancing the reasoning and generation capabilities of these models. A notable development is Retrieval Augmented Thoughts (RAT), which iteratively revises the chain-of-thought (CoT) reasoning of large language models (LLMs) using retrieved information; the technique has shown significant improvements on long-horizon generation tasks by incorporating context-aware reasoning. Additionally, the Graph of Thoughts (GoT) paper highlights the utility of aggregating multiple thoughts from different reasoning paths, indicating a trend toward more sophisticated prompting methods. The prompt expansion approach, pioneered by HyperWriteAI, has been adopted by major players and is now used in image models such as DALL-E 3 and Ideogram 1.0, marking a step-change in output quality. Finally, a new paper introduces an unsupervised consistency training method that reduces misleading explanations caused by unverbalized biases in CoT prompting, enhancing the reliability of LLM explanations.
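To make the thought-aggregation idea concrete, here is a rough sketch of sampling several reasoning paths and merging them into one answer. It is not the official GoT implementation; `call_llm` and `aggregate_thoughts` are hypothetical helpers standing in for whatever chat-completion client you use.

```python
# Sketch only: aggregate several reasoning paths into one answer.
# `call_llm` is a hypothetical wrapper around an LLM completion endpoint.

def call_llm(prompt: str) -> str:
    """Hypothetical helper; plug in your preferred LLM client here."""
    raise NotImplementedError

def aggregate_thoughts(question: str, n_paths: int = 3) -> str:
    # 1) Sample several independent reasoning paths (thoughts).
    thoughts = [
        call_llm(f"Question: {question}\nThink step by step and answer.")
        for _ in range(n_paths)
    ]
    # 2) Ask the model to merge the strongest parts of each path into a
    #    single refined answer.
    numbered = "\n\n".join(
        f"Reasoning path {i + 1}:\n{t}" for i, t in enumerate(thoughts)
    )
    return call_llm(
        f"Question: {question}\n\n{numbered}\n\n"
        "Combine the correct parts of these reasoning paths into one "
        "final, well-justified answer."
    )
```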
🚀New paper!🚀 Chain-of-thought (CoT) prompting can give misleading explanations of an LLM's reasoning, due to the influence of unverbalized biases. We introduce a simple unsupervised consistency training method that dramatically reduces this, even on held-out forms of bias. 🧵 https://t.co/LIxyqPLg9v
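A rough sketch of the consistency-training idea in the tweet: fine-tune the model so its answer on a prompt containing a bias cue matches its own answer on the clean prompt. The names (`call_llm`, `add_bias_cue`, `build_consistency_dataset`) are hypothetical placeholders, not the paper's code.

```python
# Sketch only: build (biased prompt -> unbiased answer) pairs for
# consistency fine-tuning. No human labels are needed, which is what
# makes the method unsupervised.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("LLM completion call goes here")

def add_bias_cue(prompt: str) -> str:
    # Example of a bias the model may follow without verbalizing it:
    # a suggested answer embedded in the prompt.
    return prompt + "\nA colleague of mine thinks the answer is (A)."

def build_consistency_dataset(questions: list[str]) -> list[dict]:
    dataset = []
    for q in questions:
        clean_answer = call_llm(q)        # target: the model's own unbiased answer
        biased_prompt = add_bias_cue(q)   # input: same question plus a bias cue
        dataset.append({"prompt": biased_prompt, "completion": clean_answer})
    return dataset
```

The resulting pairs would then be used for ordinary supervised fine-tuning, pushing the model to give the same explanation whether or not the bias cue is present.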
Retrieval Augmented Thoughts: Shows that iteratively revising a chain of thoughts with information retrieval can significantly improve LLM reasoning and generation in long-horizon generation tasks. The key idea is that each thought step is revised with relevant retrieved… https://t.co/yZm3QISsiT
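A minimal sketch of the RAT loop as described in the tweet: draft a chain of thought, then revise each step against retrieved context before producing the final answer. `call_llm` and `retrieve` are hypothetical stand-ins, not the paper's implementation.

```python
# Sketch only: iterative revision of a chain of thought with retrieval.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("LLM completion call goes here")

def retrieve(query: str, k: int = 3) -> list[str]:
    raise NotImplementedError("vector or web search goes here")

def rat(task: str) -> str:
    # 1) Draft an initial chain of thought, one step per line.
    draft = call_llm(f"Task: {task}\nWrite a step-by-step plan, one step per line.")
    steps = [s for s in draft.splitlines() if s.strip()]

    revised_steps = []
    for step in steps:
        # 2) Retrieve context relevant to the current step, conditioned on
        #    the task and the steps revised so far.
        query = f"{task}\n{' '.join(revised_steps)}\n{step}"
        docs = "\n".join(retrieve(query))
        # 3) Revise the step in light of the retrieved evidence.
        revised = call_llm(
            f"Task: {task}\nRetrieved context:\n{docs}\n"
            f"Revise this reasoning step so it is consistent with the context:\n{step}"
        )
        revised_steps.append(revised)

    # 4) Produce the final answer from the fully revised chain of thought.
    return call_llm(
        f"Task: {task}\nRevised reasoning:\n" + "\n".join(revised_steps) + "\nFinal answer:"
    )
```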
Really cool to see the prompt expansion approach we @HyperWriteAI pioneered years ago being adopted by major players, leading to a step-change in output quality for image models. Today, this technique is used in DALL-E 3, Ideogram 1.0, etc. https://t.co/ZqGtCjooYM
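For readers unfamiliar with prompt expansion, the idea is simply to have an LLM rewrite a terse user prompt into a detailed one before it reaches the image model. The sketch below assumes hypothetical `call_llm` and `generate_image` helpers; it is not HyperWriteAI's, DALL-E 3's, or Ideogram's actual pipeline.

```python
# Sketch only: expand a short image prompt with an LLM, then generate.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("LLM completion call goes here")

def generate_image(prompt: str) -> bytes:
    raise NotImplementedError("image-model call goes here")

def expand_and_generate(user_prompt: str) -> bytes:
    # Rewrite the terse prompt with concrete detail while keeping intent.
    expanded = call_llm(
        "Rewrite this image prompt with concrete details about subject, "
        "composition, lighting, and style, keeping the user's intent:\n"
        f"{user_prompt}"
    )
    return generate_image(expanded)
```

For example, "a cat on a beach" might be expanded into a paragraph describing the cat's pose, the time of day, and the camera angle before being sent to the image model.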
