Alibaba's Tongyi Lab has released Wan2.2, which it describes as the world's first open-source video generation model built on a Mixture-of-Experts (MoE) architecture, offering cinematic control over lighting, color, camera movement, and composition. Wan2.2 ships in several variants, including text-to-video, image-to-video, and a unified text-and-image-to-video model. Because the MoE design activates only a subset of expert parameters at each denoising step, total model capacity for complex motion grows without a matching increase in per-step compute, and the compact unified variant is designed to deliver film-like quality and precise motion control on a single consumer GPU such as the NVIDIA RTX 4090. The release marks a notable advance in China's open-source AI efforts, giving creators fine-grained control over video generation, and Wan2.2 is already integrated into platforms such as the Freepik AI Suite, broadening access to high-quality video content creation.
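For readers who want to try this locally, here is a minimal sketch of loading a Wan-family model through Hugging Face Diffusers' generic `WanPipeline`. The Wan2.2 repo ID, the GPU-memory claim, and the generation parameters are assumptions for illustration, not details confirmed by the announcement above.

```python
# Minimal sketch: running a Wan-family video model via Hugging Face Diffusers.
# Assumption: a Wan2.2 checkpoint is published in Diffusers format under a
# repo ID like the hypothetical one below and loads through WanPipeline.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-TI2V-5B-Diffusers",  # hypothetical repo ID
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # assumes a 24 GB card such as an RTX 4090 is sufficient

# Cinematic controls (camera move, lighting, depth of field) are expressed
# through the text prompt rather than separate API parameters.
frames = pipe(
    prompt="A slow dolly-in on a rain-lit street at dusk, shallow depth of field",
    negative_prompt="low quality, jitter, overexposed",
    num_frames=81,
    guidance_scale=5.0,
).frames[0]

export_to_video(frames, "wan22_demo.mp4", fps=16)
```

With a larger MoE checkpoint, the same call pattern should apply; only the repo ID and the memory requirements would change.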
This is next-level! @azed_ai just showed how easily creativity meets technology with @LeonardoAi_'s image-to-video feature. 🎥✨ Draw your idea, and BAM—it’s alive! Incredible results, right? 😮 💡 https://t.co/LLELNvMkE9
This is NEXT-LEVEL! @runwayml just dropped Aleph, and it's redefining how we create and edit videos! 🌟 Unleashing a whole new era of creativity right within your apps and platforms. 🎥 #AI #AIArtCommunity #VideoEditing #AITools https://t.co/OdVeatFDOa
What are the best AI tools for creating clips from a video?