AI-video company Runway has introduced Act-Two, a next-generation motion-capture model that builds on last year’s Act-One release. The new system is available immediately for enterprise users on the Runway platform. Act-Two can extract an actor’s head, face, body and hand movements from a single performance video and apply them to any digital character, maintaining timing, delivery and subtle expressions across styles and environments. Runway says the upgrade delivers higher fidelity and consistency while eliminating the need for green screens or multi-camera rigs. The launch adds to a wave of generative-AI tools aimed at streamlining film and advertising production. Analysts say more realistic, low-cost performance capture could accelerate content creation while raising fresh questions about the future role of human actors and crew.
Incredibly cool what @DecartAI just launched. Their new model, Mirage, is the first real-time, AI-generated video experience. Here's how it works: https://t.co/cyA4sDaOXq
This is very cool: Live-Stream Diffusion (LSD) AI mode. It turns any input video stream (camera or video) into diffusion-generated video in real time (<40ms latency). Congrats @DLeitersdorf & @DecartAI team! https://t.co/7R1X0J2pmO
Looks like you can now vibe-code a game in 30 minutes. Take any video stream or your favorite video game and set it in any alternative universe of your choosing. MirageLSD just dropped: the first Live-Stream Diffusion (LSD) AI model. A fundamental roadblock in traditional https://t.co/s40X80bfTB https://t.co/oYMVHvE39U