Google DeepMind has released Magenta RealTime, an open-weights music generation model built for real-time audio generation with continuous user control. The model has roughly 800 million parameters and was trained on about 190,000 hours of instrumental stock music. It generates audio in two-second chunks, each conditioned on the previous 10 seconds of context, and outputs high-fidelity 48 kHz stereo sound. Magenta RealTime is released under the permissive Apache 2.0 license and can run on Colab TPUs. The release aims to give musicians immediate feedback during live performance, addressing the latency that has kept existing music generation models out of live settings.
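To make the streaming scheme concrete, here is a minimal sketch of the rolling-context loop described above. The `generate_chunk` and `stream` functions are hypothetical stand-ins, not the actual Magenta RealTime API; only the buffer arithmetic follows the published numbers (2-second chunks, 10-second context, 48 kHz stereo).

```python
import numpy as np

SAMPLE_RATE = 48_000      # 48 kHz stereo output
CHUNK_SECONDS = 2         # each generation step emits 2 s of audio
CONTEXT_SECONDS = 10      # the model conditions on the prior 10 s
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS
CONTEXT_SAMPLES = SAMPLE_RATE * CONTEXT_SECONDS


def generate_chunk(context: np.ndarray, style_prompt: str) -> np.ndarray:
    """Hypothetical stand-in for the model call. A real implementation
    would condition on `context` and `style_prompt`; here we return
    silence so the loop is runnable end-to-end."""
    return np.zeros((CHUNK_SAMPLES, 2), dtype=np.float32)


def stream(style_prompt: str, total_seconds: int = 30):
    """Rolling-context generation: keep the last 10 s of audio,
    generate the next 2 s chunk, then slide the window forward."""
    context = np.zeros((CONTEXT_SAMPLES, 2), dtype=np.float32)  # silent warm-up
    for _ in range(total_seconds // CHUNK_SECONDS):
        chunk = generate_chunk(context, style_prompt)
        yield chunk  # hand off to the audio device / ring buffer
        # Drop the oldest 2 s of context and append the new chunk.
        context = np.concatenate([context[CHUNK_SAMPLES:], chunk], axis=0)


if __name__ == "__main__":
    n = sum(c.shape[0] for c in stream("upbeat funk", total_seconds=10))
    print(f"generated {n / SAMPLE_RATE:.0f} s of audio")
```

The real-time constraint falls out of this structure: each chunk call must return in under two seconds of wall-clock time, or the playback buffer underruns and the live performance stalls.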
Incredible. @GoogleDeepMind just released Magenta RealTime: An Open-Weights Live Music Model Apache 2.0 licensed 🔥 Real-time music models still lag when artists need immediate feedback on stage. Magenta RealTime streams high-fidelity audio in two-second bursts faster than it https://t.co/0fSL8TJJKB
WOW! DeepMind *just* dropped Magenta Real-time - Apache 2.0 licensed 🔥 > 800M params transformer, trained on ~190K hours of instrumental stock music > adapts MusicLM for real-time generation via 2s audio chunks (conditioned on prior 10s context) > 48 kHz stereo > MusicCoCa: New https://t.co/D0spwI4cn9
So excited to welcome Google's model #1000 at Hugging Face: Magenta Real Time!🤯 🎷Music generation model ⚡️Real-time 👀Permissive license 🤏800 million parameters Model: https://t.co/exBdIs9wuY Blog: https://t.co/TT2Koo6Ihf