AI video start-up Pika Labs unveiled an audio-driven performance model that can generate high-definition, lip-synced footage in roughly six seconds, which the company says makes it 20 times faster and cheaper than its previous system. The tool accepts input of any length, works across visual styles from photorealistic to anime and produces what the company calls “hyper-real” facial expressions in near real time.

Higgsfield, another generative-video specialist, integrated Seedance Pro into its core platform. The update ships with more than 30 presets, a multi-shot mode for capturing several camera angles in a single five-to-ten-second sequence and a promotional offer of unlimited generations for the first week. Early users describe the product as capable of delivering actor-like performances from simple text prompts.

Separately, AI search engine Perplexity introduced a text-to-video feature for its paying Pro and Max subscribers. The service outputs eight-second clips with audio, extending the company’s push beyond text-based answers into multimedia content generation.

The trio of launches underscores a broader acceleration in generative video, as smaller developers race to shorten rendering times, cut costs and add creative controls that once required professional film crews. Analysts say the rapid cadence of new features is likely to intensify competition with larger incumbents such as OpenAI and Google, which are also developing commercial-grade video models.
The age of post-launch hope is over. With QuillShield, smart contracts now have a real-time AI bouncer watching every transaction. No more waiting for exploits to be found; we stop them before they happen. Live. Proactive. Autonomous. https://t.co/sR9CfUSHM5
Our Layer 1, AIVM, is approaching Testnet. Decentralized AI, done the right way. But who’s it built for? ⤵️ https://t.co/UlX7CoPRYe