
Google Research, in collaboration with Google DeepMind, has introduced EM Distillation (EMD) for one-step diffusion models, a new method aimed at improving the efficiency and quality of generative models. The technique outperforms existing one-step generative methods in Fréchet Inception Distance (FID) on the ImageNet 64x64 and ImageNet 128x128 benchmarks. EMD uses a maximum-likelihood objective to distill pretrained diffusion models into one-step generators, interpolating between the mode-seeking and mode-covering Kullback-Leibler (KL) divergences to better capture the teacher model's distribution. This advancement addresses the computationally expensive iterative sampling required by traditional diffusion models, as well as the limitations of existing distillation methods, and delivers competitive results on both ImageNet and Stable Diffusion benchmarks. The research was led by S. Xie, together with Z. Xiao, D. P. Kingma, T. Hou, and others.
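The intuition behind interpolating between the two KL directions can be pictured with a toy example. The sketch below is illustrative only, not the paper's actual objective or training loop: a single-Gaussian "student" is fit to a bimodal "teacher" by minimizing a convex combination of the forward (mode-covering) KL and the reverse (mode-seeking) KL, both estimated by Monte Carlo. The mixing weight `lam`, the distributions, and all names are assumptions made for the demonstration.

```python
import torch
import torch.distributions as D

# Toy "teacher": a two-mode Gaussian mixture standing in for a
# pretrained diffusion model's data distribution.
teacher = D.MixtureSameFamily(
    D.Categorical(torch.tensor([0.5, 0.5])),
    D.Normal(torch.tensor([-2.0, 2.0]), torch.tensor([0.5, 0.5])),
)

# Toy "student": a single Gaussian whose parameters we fit by gradient descent.
mu = torch.tensor(0.0, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

lam = 0.5  # 1.0 -> purely mode-covering, 0.0 -> purely mode-seeking

for step in range(2000):
    student = D.Normal(mu, log_sigma.exp())

    # Forward KL(p||q): Monte Carlo estimate using teacher samples.
    # Equals the cross-entropy up to the constant entropy H(p),
    # so the gradient with respect to the student is unchanged.
    x_p = teacher.sample((512,))
    forward_kl = -student.log_prob(x_p).mean()

    # Reverse KL(q||p): reparameterized student samples keep gradients.
    x_q = student.rsample((512,))
    reverse_kl = (student.log_prob(x_q) - teacher.log_prob(x_q)).mean()

    loss = lam * forward_kl + (1.0 - lam) * reverse_kl
    opt.zero_grad()
    loss.backward()
    opt.step()
```

With `lam` near 0 the student locks onto a single mode of the teacher; with `lam` near 1 it stretches to cover both. EMD's reported advantage comes from navigating between these two regimes rather than committing to either extreme.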
1-step distillation for diffusion remains challenging, sometimes vulnerable to mode collapse. 🤯 Check our new work: EM Distillation (EMD) to tackle this! Competitive results on ImageNet 64x64, 128x128 and Stable Diffusion. https://t.co/9freS692oF Led by brilliant @SiruiXie https://t.co/3entrlVL0Z
📢 Excited to share EM Distillation (EMD), a maximum likelihood method that distills pretrained diffusion models to one-step generators. EMD gracefully interpolates between mode-seeking and mode-covering KL to better capture the teacher's distribution. https://t.co/CdkB5LpvWi https://t.co/BnLe27T4Ch
[LG] EM Distillation for One-step Diffusion Models S Xie, Z Xiao, D P Kingma, T Hou… [Google DeepMind & Google Research] (2024) https://t.co/LwxnlAeVMS - Diffusion models enable high-quality generation but require expensive iterative sampling. Existing distillation methods have… https://t.co/JtnKmx6S6c
