Sync Labs has launched lipsync-1.9-beta, a zero-shot AI lip-syncing model that requires no speaker-specific training data. The model enables generation and editing of natural speech in live-action, animation, and AI-generated video, promising faster and more accurate synchronization of audio and video for dubbing. It has also been integrated natively into Sieve's AI ecosystem. Meanwhile, the world's first feature film dubbed using AI has been released, showcasing how these advances can make dubbing more efficient while preserving emotional authenticity, potentially giving millions access to knowledge, entertainment, and connection regardless of their native tongue.
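Sieve exposes hosted models as functions callable from its Python client, so the integration presumably surfaces the new model the same way. Below is a minimal sketch of that calling pattern; the function slug `sync/lipsync-1.9-beta` and the argument order are illustrative assumptions, not the documented interface for this release.

```python
# Minimal sketch of invoking a hosted lipsync model via Sieve's Python
# client (pip install sievedata). The function slug and argument order
# are assumptions for illustration, not the documented 1.9-beta interface.
import sieve

# Source footage and the dubbed audio track to sync it to.
video = sieve.File(url="https://example.com/source_video.mp4")
audio = sieve.File(url="https://example.com/dubbed_audio.wav")

# Look up the hosted function by slug (assumed name) and run it
# synchronously; zero-shot means no per-speaker fine-tuning precedes this.
lipsync = sieve.function.get("sync/lipsync-1.9-beta")
result = lipsync.run(video, audio)

# The returned file handle points at the lip-synced output video.
print(result.path)
```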
Who’s building AI for generating videos of sign language interpreters from audio input? https://t.co/l3ZFZu53F6
Imagine millions gaining access to knowledge, entertainment, and connection regardless of their native tongue. We're excited to announce our partnership with @sievedata to power lipsync for their best-in-class AI dubbing workflow. Check it out 🧵 https://t.co/4Dige6y4yG
We’re excited to announce our partnership with @synclabs_so to bring their newly released 1.9.0-beta model for zero-shot lipsync natively into the Sieve ecosystem! 🧵 https://t.co/WEpvGX5yVP