The University of Hong Kong has released Dream 7B, an open-source diffusion reasoning language model that its authors report significantly outperforms existing diffusion language models. The model lets users adjust the number of diffusion timesteps, trading generation speed against output accuracy, and a community-built demo lets users test the model and watch the diffusion process in real time. Separately, Scenario Dreamer (CVPR 2025) applies vectorized diffusion, operating directly on vectorized scene elements without rasterized encodings, to generate novel unseen driving scenes for closed-loop simulation.
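The speed/accuracy knob mentioned above can be illustrated with a toy sketch of masked-diffusion decoding (this is not the Dream 7B implementation; `toy_predict`, the token names, and the scheduling heuristic are all hypothetical stand-ins). A masked-diffusion LM starts from a fully masked sequence and unmasks tokens over a configurable number of denoising steps: fewer steps commit more tokens per step (faster, coarser), more steps commit fewer per step (slower, typically more accurate).

```python
import random

MASK = "<mask>"

def toy_predict(tokens):
    """Hypothetical denoiser: propose a token and a confidence score for
    each masked position. A real model would run a transformer here."""
    return {i: (f"tok{i}", random.random())
            for i, t in enumerate(tokens) if t == MASK}

def diffusion_decode(length, num_steps):
    """Unmask a sequence over `num_steps` denoising steps.
    Fewer steps -> more tokens committed per step (faster, coarser);
    more steps -> fewer tokens per step (slower, usually more accurate)."""
    tokens = [MASK] * length
    for step in range(num_steps):
        proposals = toy_predict(tokens)
        if not proposals:
            break
        # Commit the highest-confidence proposals this step, spreading the
        # remaining masked positions evenly over the remaining steps.
        remaining_steps = num_steps - step
        k = max(1, len(proposals) // remaining_steps)
        best = sorted(proposals.items(), key=lambda kv: -kv[1][1])[:k]
        for i, (tok, _conf) in best:
            tokens[i] = tok
    return tokens

# num_steps is the knob: 1 step commits everything at once; more steps
# refine the sequence gradually.
print(diffusion_decode(length=8, num_steps=4))
```

Real diffusion LMs expose this as a decoding parameter, which is what makes the live-demo visualization of the process (see the tweet below) possible.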
Dream, the diffusion-based LLM, is out. I think it will be pretty fun to tinker with it! https://t.co/ZteRE1LpRO https://t.co/4NQ30mQVfa
The Dream 7B (diffusion reasoning language model) is OUT! 🚨 I built a demo so you can test it out (and check the diffusion process live) 𖣯🔍 https://t.co/p1LN45VEaC https://t.co/u1TLkJb5fm
Vectorized Diffusion without Rasterized Encodings! Scenario Dreamer (CVPR 2025) directly operates on vectorized scene elements to generate novel unseen scenes - making fully data-driven closed-loop generative driving simulation possible. We trained a novel vectorized latent https://t.co/VhAEbkMSFE