OpenAI has introduced sCM, a new take on continuous-time consistency models that sharply accelerates high-quality image generation. Where traditional diffusion models typically need 50-100 sampling steps, sCM produces images of comparable quality in just 2 steps, roughly a 50x wall-clock speedup. The work simplifies, stabilizes, and scales the training of continuous-time consistency models, making them practical for real-time generation across media such as images, audio, and video. The largest model, at 1.5 billion parameters, generates an image in about 0.11 seconds, fast enough for real-time rendering.
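To make the step-count claim concrete, here is a minimal sketch of the two sampling regimes. `consistency_fn` and `denoiser_fn` are hypothetical placeholders for trained networks (not OpenAI's released code), and the noise-schedule values are illustrative; the point is simply that consistency sampling costs two network evaluations where a diffusion sampler costs on the order of 50-100, which is where the ~50x speedup comes from.

```python
import torch

# Hypothetical sketch: 2-step consistency sampling vs. many-step diffusion sampling.
# `consistency_fn(x, sigma)` and `denoiser_fn(x, sigma)` stand in for trained networks
# that estimate the clean image from a noisy input at noise level sigma.

def two_step_consistency_sample(consistency_fn, shape, sigma_max=80.0, sigma_mid=0.8):
    """sCM-style sampling: one jump from noise to a clean estimate,
    re-noise slightly, then one final jump (2 network evaluations total)."""
    x = torch.randn(shape) * sigma_max           # start from pure noise
    x0 = consistency_fn(x, sigma_max)            # step 1: noise -> clean estimate
    x = x0 + torch.randn(shape) * sigma_mid      # re-inject a small amount of noise
    return consistency_fn(x, sigma_mid)          # step 2: final sample

def diffusion_sample(denoiser_fn, shape, num_steps=100, sigma_max=80.0, sigma_min=0.002):
    """Baseline: Euler sampling of the probability-flow ODE over num_steps
    noise levels (typically 50-100), one network evaluation per step."""
    sigmas = torch.linspace(sigma_max, sigma_min, num_steps)
    x = torch.randn(shape) * sigma_max
    for i in range(num_steps - 1):
        x0_hat = denoiser_fn(x, sigmas[i])               # one evaluation per step
        d = (x - x0_hat) / sigmas[i]                     # ODE direction estimate
        x = x + d * (sigmas[i + 1] - sigmas[i])          # Euler step to next noise level
    return x
```

With a network of fixed cost per call, the two samplers differ only in the number of evaluations (2 vs. num_steps), so the runtime ratio is approximately num_steps / 2.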
Stable Diffusion 3.5 Large is now on Poe! Stability AI's most powerful model yet, this image generator delivers superior image quality, prompt adherence, and output diversity compared to its predecessor. (1/2) https://t.co/F76WVBdECC
Simplifying, stabilizing & scaling continuous-time consistency models, by Cheng Lu & Yang Song at OpenAI. Summary: This paper presents a method for simplifying, stabilizing, and scaling up the training of continuous-time consistency models (CMs), a powerful class of generative… https://t.co/2yxdolVUtC
Created with Stable Diffusion 3.5 by @StabilityAI on @MageSpace_ https://t.co/BJ0v9auSiC