Pika Labs Debuts Faster, Cheaper AI Model for Real-Time Lip-Sync Video
Pika Labs unveiled an audio-driven performance model that generates high-definition, lip-synced video clips in roughly six seconds, regardless of clip length or visual style. The start-up says the system produces "hyper-real" facial expressions while running about 20 times faster and at one-twentieth the cost of its previous-generation model. The release intensifies competition among generative-AI firms racing to automate video creation for entertainment, marketing and social media. Early testers showcased real-time results set to complex audio tracks, underscoring the model's potential to streamline production workflows and lower barriers for independent creators.
Sources
- TomLikesRobots🤖
Ooh. I'm very impressed by Pika's new lipsync model. I've had a chance to check it out - very fast and listens to prompts. Check it out. Voice here from ElevenLabs. https://t.co/9qKmddCmgW https://t.co/Pjer498IG2
- Proper
Wow, look at those lips!! 🫦 Pika let me test their new model early and it's very impressive. Check out their announcement below 👇 https://t.co/05NOMh6IaB https://t.co/YEzutrc0Ef
- Stelfie the Time Traveller
Lucky enough to get early access to the new @pika_labs lip sync model 🎤 Thought I’d have a bit of fun with a super challenging song! The model is fast, followed well the prompts and delivered great results! Any length and any style! https://t.co/d4V5e7lbmQ https://t.co/Hdb8HRToOu