A new open-source text-to-video model with 5 billion parameters is reportedly set for release soon, according to various sources in the AI community. The news has sparked discussion about the quality of existing open-source video models and the potential for further advances in the space. Observers have also asked whether major companies such as NVIDIA, Meta, and xAI will release open-source models of their own, or whether the anticipated LLaMA 3.1 release will be followed by models that push the boundaries of video generation.