Sources
Riley Goodside: I tried to write a prompt to show how base LLMs differ from the RLHF-tuned ones everyone knows, and I think this gives a bit of the flavor. A message from Llama 3.1 405B (base), on whether it’s useful to talk to base LLMs: https://t.co/MiqJ1wPoxQ
elvis: Your LLM is only as good as how robust your prompting method is. Seems you can enhance the robustness of LLMs by "prompting out" irrelevant information from context. Think of it as a self-mitigation process that first identifies the irrelevant information and then filters it… https://t.co/gLCvcMDzrO
Rohan Paul: LLMs can often identify irrelevant information but fail to exclude it during reasoning.🤔 "Analysis to Filtration" (ATF) prompting technique to the rescue. ✨ Original Problem 🔍: LLMs struggle with reasoning tasks when problem descriptions contain irrelevant information, even… https://t.co/DDm5dn79rG
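The two-stage idea behind ATF (first analyze for irrelevant sentences, then solve on the filtered problem) can be sketched as two chained prompts. This is a minimal illustration of the pattern, not the paper's exact prompts; `call_llm` is a hypothetical stand-in for whatever chat-completion API you use.

```python
def build_analysis_prompt(problem: str) -> str:
    """Stage 1: ask the model to flag sentences irrelevant to the question."""
    return (
        "Read the problem below and list any sentences that are irrelevant "
        "to answering the final question.\n\n"
        f"Problem:\n{problem}"
    )


def build_filtration_prompt(problem: str, irrelevant: str) -> str:
    """Stage 2: restate the problem without the flagged sentences, then solve."""
    return (
        "Rewrite the problem below, omitting the irrelevant sentences listed, "
        "then solve the cleaned-up problem step by step.\n\n"
        f"Problem:\n{problem}\n\n"
        f"Irrelevant sentences:\n{irrelevant}"
    )


def atf_answer(problem: str, call_llm) -> str:
    # Chain the two stages: the analysis output feeds the filtration prompt.
    irrelevant = call_llm(build_analysis_prompt(problem))
    return call_llm(build_filtration_prompt(problem, irrelevant))
```

The point of the split is that models are better at *spotting* distractors than at *ignoring* them mid-reasoning, so the filtration stage never has to juggle both jobs at once.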
