Pika Labs has introduced an audio-driven performance model that produces hyper-realistic, lip-synced video in near real time. According to the company's announcement on 11 August, the system converts a selfie and a sound clip into high-definition footage in roughly six seconds. The model supports videos of any length and a range of visual styles, from photorealistic and anime to pixel art, and delivers output at up to 720p resolution. Pika Labs says the upgrade is 20 times faster and cheaper than its previous generation, addressing the cost and speed barriers that have limited wider adoption of AI-generated video. Early testers report smooth, accurate lip movements and expressive avatars that go a long way toward closing the "uncanny valley" gap. The release underscores the rapid pace of innovation in generative video, where start-ups such as Pika Labs are racing to combine realism, speed and affordability for creators and enterprises alike.
This AI feels too human. Higgsfield's Seedance Pro integration is delivering full-on performances. This isn't just video, this is acting. I'm a little freaked out. https://t.co/tak5uRX5FI
This is probably the most human AI we've seen yet. Higgsfield just added Seedance Pro + presets to its suite and became the fastest film set in the world! Examples, prompts and everything you need to know in this thread 🧵 https://t.co/qqHjEt9nnL
This is true art. Higgsfield's Seedance AI delivers a stunning performance. https://t.co/PPOIDgdDKc