Alibaba has released open weights for its new Wan 2.2 text- and image-to-video model under an Apache 2.0 licence. The system uses a 27-billion-parameter mixture-of-experts design in which roughly 14 billion parameters are active for any given generation step. Independent benchmarks circulated by researchers describe its output quality as comparable with Google's Veo 2 and Kuaishou's Kling 2.0, at a fraction of the computational cost.

Commercial availability followed within hours. Freepik added Wan 2.2 to its AI Suite, while cloud host Replicate began charging US$0.05 for a 480p clip and US$0.10 for 720p output, one of the lowest public price points for high-fidelity video generation.

Separately, Beijing-based MiniMax introduced Hailuo 02 Fast, an accelerated variant of its flagship video generator. Partners including Replicate, FAL and Higgsfield are offering 6- or 10-second 512p clips for US$0.10, with typical render times under one minute. The model supports both text-to-video and image-to-video workflows, and MiniMax promises further speed gains as optimisation continues.

The twin launches underline how quickly open-source and low-cost AI tools are advancing in a market long dominated by proprietary systems. By driving prices toward a few cents per clip and publishing under permissive licences, Chinese developers are pressuring Western rivals and could accelerate adoption of synthetic video across advertising, entertainment and e-commerce.
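For readers who want to see how the per-clip pricing translates into practice, the hosted versions sit behind Replicate's standard prediction API. The following is a minimal sketch using Replicate's official Python client; the model slug and input field names are illustrative assumptions, not confirmed identifiers, and should be checked against the model's page on Replicate.

# Minimal sketch: requesting a short clip from a hosted Wan 2.2 endpoint on Replicate.
# Assumes the `replicate` Python client is installed and REPLICATE_API_TOKEN is set.
# The model slug and input keys below are illustrative guesses, not confirmed names.
import replicate

output = replicate.run(
    "wan-video/wan-2.2-t2v",  # hypothetical slug; check Replicate's model catalogue
    input={
        "prompt": "a paper boat drifting down a rain-soaked street, cinematic lighting",
        "resolution": "480p",  # assumed parameter; 480p output is the ~US$0.05 tier
    },
)

# The client returns the prediction output, typically a URL to the rendered video.
print(output)

At these prices, a 30-second advertising storyboard rendered as five or six separate clips would cost well under a dollar, which is the economics driving the adoption argument above.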