Researchers from Cornell Tech and Stanford University have introduced Block Discrete Denoising Diffusion Language Models (BD3-LMs), a class of models that combines autoregressive and diffusion approaches to improve the efficiency and scalability of text generation. BD3-LMs model text autoregressively over blocks while denoising all tokens within each block in parallel, which may improve throughput and quality compared with methods that predict one token at a time. The approach aims to address limitations of existing language models in maintaining fluency and semantic coherence during generation, and the study also explores the potential of diffusion-based models for achieving high constraint fidelity in text generation tasks.
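Below is a minimal, illustrative sketch of the block-wise generation scheme described above: sequential (autoregressive) across blocks, iterative parallel denoising within each block. The stand-in denoiser, the mask-token convention, the unmasking schedule, and all hyperparameters here are assumptions for illustration only, not the authors' implementation.

```python
# Sketch of block-autoregressive generation with within-block discrete
# denoising, in the spirit of BD3-LMs. `denoise_step`, MASK_ID, and the
# linear unmasking schedule are hypothetical placeholders.
import torch

VOCAB_SIZE = 100
MASK_ID = 0          # reserved "masked" token for discrete diffusion
BLOCK_SIZE = 4       # tokens generated in parallel per block
NUM_BLOCKS = 3
NUM_STEPS = 8        # denoising steps per block

def denoise_step(context: torch.Tensor, block: torch.Tensor, t: int) -> torch.Tensor:
    """Stand-in denoiser: returns logits for every position in `block`,
    conditioned on the previously generated `context` and the step `t`.
    A real model would be a transformer; random logits keep this runnable."""
    return torch.randn(block.size(0), VOCAB_SIZE)

def sample_block(context: torch.Tensor) -> torch.Tensor:
    # Start from a fully masked block and progressively unmask positions,
    # so the whole block is refined in parallel rather than left-to-right.
    block = torch.full((BLOCK_SIZE,), MASK_ID, dtype=torch.long)
    for t in reversed(range(NUM_STEPS)):
        logits = denoise_step(context, block, t)
        logits[:, MASK_ID] = float("-inf")  # never propose the mask token
        proposal = torch.distributions.Categorical(logits=logits).sample()
        # Unmask a growing fraction of positions as t -> 0; once a token
        # is committed it stays fixed (no remasking in this simple form).
        keep = torch.rand(BLOCK_SIZE) < (NUM_STEPS - t) / NUM_STEPS
        block = torch.where(keep & (block == MASK_ID), proposal, block)
    # Commit any positions still masked after the final step.
    return torch.where(block == MASK_ID, proposal, block)

def generate() -> torch.Tensor:
    # Autoregressive over blocks, diffusion within each block.
    context = torch.empty(0, dtype=torch.long)
    for _ in range(NUM_BLOCKS):
        context = torch.cat([context, sample_block(context)])
    return context

print(generate())
```

The structural point the sketch tries to capture is that the outer loop stays sequential over blocks, preserving an autoregressive factorization across blocks, while the inner loop refines all positions of a block simultaneously, which is where the potential efficiency gain over token-by-token decoding comes from.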
[CL] Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning B Jin, H Zeng, Z Yue, D Wang... [University of Illinois at Urbana-Champaign & University of Massachusetts Amherst] (2025) https://t.co/4FQ8pMi9zi https://t.co/YgLRoBPGSb
[LG] Ideas in Inference-time Scaling can Benefit Generative Pre-training Algorithms J Song, L Zhou [Luma AI] (2025) https://t.co/U7MdzmRqi0 https://t.co/yamu2Rwqp0
[LG] Inductive Moment Matching L Zhou, S Ermon, J Song [Luma AI & Stanford University] (2025) https://t.co/N0cjVNLSRK https://t.co/8bLGcOVsKO