The release of the CogVideoX-5B-I2V model by the ChatGLM team marks a significant advancement in open-source AI. The new model generates videos from a starting image combined with a text prompt. It is now available on GitHub and can be run on a free-tier Colab as well as in a Gradio Space. Additionally, the Qwen2-VL-2B model developed by the Alibaba Qwen team now supports both video and image processing, thanks to contributions from the AI community. Users are advised to duplicate the Space with an L4 GPU to avoid long waiting queues.
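The image-to-video workflow described above can be sketched with the Hugging Face diffusers integration. This is a minimal, hedged example: the pipeline class and checkpoint name (`CogVideoXImageToVideoPipeline`, `THUDM/CogVideoX-5b-I2V`) and the parameter values are assumptions based on the announced release, so verify them against the model card before use.

```python
# Hedged sketch: image-to-video generation with CogVideoX-5B-I2V via diffusers.
# Class/checkpoint names are assumptions based on the release announcement.
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
)
# Offload model parts to CPU so the pipeline fits on smaller GPUs,
# e.g. the free-tier Colab hardware mentioned above.
pipe.enable_model_cpu_offload()

image = load_image("input.jpg")  # the image used as the video's background/first frame
video = pipe(
    prompt="a gentle breeze moves through the scene",
    image=image,
    num_frames=49,        # assumed default clip length
    guidance_scale=6.0,   # assumed default guidance value
).frames[0]
export_to_video(video, "output.mp4", fps=8)
```

Generation is GPU- and memory-intensive, which is why the announcement suggests duplicating the hosted Space onto dedicated hardware rather than waiting in the shared queue.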
🚀 Qwen 2.5 models are now running locally in browsers, powered by #WebLLM & @WebGPU! Multilingual, long-context, a variety of model sizes -- try it now in a @huggingface Space! 👇 https://t.co/ka0OO7L5CH https://t.co/4O5eeq30x3
The latest version of CogVideoX is here 🔥🔥 CogVideoX-5B-I2V 🚀 the image-to-video model released by @ChatGLM Model: https://t.co/V9OSYkBfqX Demo: https://t.co/c8qteAKSe3 ✨ The new model lets you use an image as a background along with prompts to create videos. ✨ With this…
Thank you to the passionate developers for your continued support and patience. CogVideoX-5B-I2V is released! 😀 GitHub: https://t.co/VNpl283CPS CogVideoX-5B-I2V model: https://t.co/85AiDO6YcD Gradio Space: https://t.co/f0dR1IqrCT