StepFun AI has released a new open-source text-to-video model, Step-Video-T2V, with 30 billion parameters and the ability to generate up to 204 frames per video. The model is released under the MIT license, encouraging collaboration within the AI community. It is designed for high-compression video generation, enabling large-scale creation without significant quality loss. The model supports 540p resolution and was trained on thousands of NVIDIA H800 GPUs, positioning it competitively against existing models such as Meta's MovieGen. The release is part of a broader trend in AI, as companies unveil advances in video generation technology, including Topaz Labs' Project Starlight, which focuses on video enhancement using diffusion models.
🔸Video Model Comparison: Image to Video
6 models included:
• Pika 2.1
• Adobe Firefly
• Runway Gen-3
• Kling 1.6
• Luma Ray2
• Hailuo T2V-01
This time I used an image generated with Magnific's new Fluid model (Google DeepMind's Imagen + Mystic 2.5), and the same… https://t.co/rH1gRbhynB
Open-source video generation model https://t.co/ERPYuXtFNh
Art in Motion with #Ray2 Img-to-Vid. Bring paintings to life and create moving masterpieces. Just drop an image into #DreamMachine and ask it to move. Your gallery in motion awaits. https://t.co/JcGI3dQ5Mu