Recent research on large language models (LLMs) spans several new papers. 'Align-SLM: Textless Spoken Language Models with Reinforcement Learning from AI Feedback' improves spoken language modeling without relying on intermediate text, using reinforcement learning from AI feedback. 'SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models,' from researchers at Duke University and Google Research, proposes a decoding method for improving the factual accuracy of LLM outputs. Finally, the 'FactTest' framework offers factuality testing for LLMs with statistical guarantees that are finite-sample and distribution-free. Together, these studies are steps toward more reliable AI-driven language models.
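To make the FactTest claim concrete: a finite-sample, distribution-free guarantee is the kind of property that conformal hypothesis testing provides. The sketch below is not taken from the FactTest paper; it is a generic split-conformal illustration, assuming a held-out calibration set of questions the model answered incorrectly and a scalar confidence score per answer (both hypothetical inputs), where answering only when the conformal p-value falls below alpha bounds the rate of confidently wrong answers at alpha for any data distribution and any calibration-set size.

```python
import numpy as np

def conformal_pvalue(calib_scores_incorrect: np.ndarray, new_score: float) -> float:
    """Conformal p-value for H0: 'the model's new answer is incorrect'.

    calib_scores_incorrect: confidence scores on held-out questions the
    model answered INCORRECTLY, assumed exchangeable with the new case
    under H0. Higher score means more confident.
    """
    n = len(calib_scores_incorrect)
    # Rank the new score among the calibration scores of wrong answers;
    # the +1 terms give the exact finite-sample validity of the p-value.
    return (1 + np.sum(calib_scores_incorrect >= new_score)) / (n + 1)

def answer_or_abstain(new_score: float,
                      calib_scores_incorrect: np.ndarray,
                      alpha: float = 0.05) -> str:
    """Answer only if H0 ('this answer is incorrect') is rejected at level alpha.

    Guarantee: among questions whose answers are in fact incorrect, the
    probability of answering anyway is at most alpha, with no assumptions
    on the score distribution and for any calibration-set size n.
    """
    p = conformal_pvalue(calib_scores_incorrect, new_score)
    return "answer" if p <= alpha else "abstain"

# Toy usage with synthetic scores (hypothetical data for illustration only).
rng = np.random.default_rng(0)
calib = rng.beta(2, 5, size=500)  # scores on questions the model got wrong
print(answer_or_abstain(new_score=0.9, calib_scores_incorrect=calib))
```

The guarantee rests only on exchangeability between the calibration scores and the new case under the null, which is why it holds distribution-free and at finite n; FactTest's actual testing procedure may differ.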
FactTest: Factuality Testing in Large Language Models with Finite-Sample and Distribution-Free Guarantees https://t.co/Tc7Docf2y0
[CL] SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models J Zhang, D Juan, C Rashtchian, C Ferng… [Duke University & Google Research] (2024) https://t.co/kacZLl1U6y https://t.co/SZSAosjqgS
FactTest: Factuality Testing in Large Language Models with Statistical Guarantees. https://t.co/bnsc9i2PzF