Sand AI has released MAGI-1, a 24-billion-parameter autoregressive diffusion video model, under the Apache 2.0 open-source license. MAGI-1 delivers strong temporal consistency and benchmark-leading performance in video generation, and supports text-to-video, image-to-video, and video-to-video workflows, enabling high-fidelity, long-form, instruction-following video creation. The model is available on Hugging Face, including distilled and quantized variants for more efficient inference. Separately, OpenAI's image generation model has been integrated into Higgsfield, offering over 30 viral style presets such as Ghibli and Pixar that let users restyle and animate images in seconds. Finally, Step1X-Edit, a new practical framework for general image editing, has launched on Hugging Face; it outperforms existing open-source baselines by a substantial margin and approaches the performance of GPT-4o and Gemini Flash.
Step1X-Edit just dropped on Hugging Face: A Practical Framework for General Image Editing https://t.co/SnTB4AtynD
Step1X-Edit: A Practical Framework for General Image Editing
- Outperforms existing open-source baselines by a substantial margin
- Approaches the performance of GPT-4o and Gemini Flash
https://t.co/hGYnx5Qlc6
Turn any image into your favorite cartoon style and animate it in seconds, now possible with our new OpenAI-powered style presets. More examples in the comments. 🧩 1/n https://t.co/bBTLKznEey