Researchers are advancing tools to identify AI-generated text, with watermarking emerging as a leading approach. Governments are investing in the technology to curb the spread of AI-generated content, although getting developers to adopt and coordinate on these methods remains a challenge. Google DeepMind recently released SynthID Text, which adjusts the sampling of LLM-generated tokens to embed a statistical signature while preserving output quality; the technique has been running on the Gemini chatbot for several months and is now available to any developer, via Hugging Face, to make AI-generated text detectable. Watermarking is also being applied to AI-generated audio in some regions, but experts stress that it is not a complete defense and that public vigilance and media literacy remain essential for countering voice-cloning scams.
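To make the "statistical signature" idea concrete, here is a minimal, self-contained sketch of sampling-based text watermarking. It is not SynthID's actual algorithm; it follows the simpler "green-list" construction of Kirchenbauer et al. (2023), and the key, bias values, and helper functions are illustrative assumptions only. At each step, the previous token seeds a pseudo-random partition of the vocabulary, sampling is nudged toward the "green" half, and a detector counts how often green tokens appear relative to chance.

```python
# Illustrative green-list watermarking sketch (NOT SynthID's algorithm).
# Generation biases sampling toward a key-dependent "green" subset of the
# vocabulary; detection measures how far the green-token rate exceeds chance.
import hashlib
import math
import random

SECRET_KEY = "watermark-demo-key"   # hypothetical key shared by generator and detector
GREEN_FRACTION = 0.5                # fraction of the vocabulary marked "green" per step
GREEN_BIAS = 4.0                    # logit boost applied to green tokens


def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token and the key."""
    seed = hashlib.sha256(f"{SECRET_KEY}|{prev_token}".encode()).hexdigest()
    rng = random.Random(seed)
    k = int(len(vocab) * GREEN_FRACTION)
    return set(rng.sample(vocab, k))


def sample_token(prev_token: str, logits: dict[str, float]) -> str:
    """Sample the next token after boosting the logits of green tokens."""
    greens = green_list(prev_token, list(logits))
    boosted = {t: l + (GREEN_BIAS if t in greens else 0.0) for t, l in logits.items()}
    z = max(boosted.values())
    weights = [math.exp(l - z) for l in boosted.values()]   # softmax, numerically stable
    return random.choices(list(boosted), weights=weights)[0]


def detect(tokens: list[str], vocab: list[str]) -> float:
    """Return a z-score: how far the observed green-token count exceeds chance."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev, vocab))
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std
```

On a passage of a few hundred tokens generated this way, the detector's z-score grows large, while unwatermarked text hovers near zero; SynthID's published scheme is more sophisticated but relies on the same principle of a key-dependent statistical bias that survives in the output distribution.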
While some countries use watermarking for #AI-generated audio, it’s not a complete defense. Public vigilance and media literacy are key to combating voice cloning scams. 🎙️💡 Read more 👇 https://t.co/nI1OXKgxjU
Google DeepMind has been using its AI watermarking method on Gemini chatbot responses for months – and now the tool is available for any developer to make their own AI-generated text easy to detect. https://t.co/rbx61lCmCQ
[LG] Provably Robust Watermarks for Open-Source Language Models. M Christ, S Gunn, T Malkin, M Raykova [Columbia University & UC Berkeley] (2024). https://t.co/fgmoADSLPc https://t.co/MlN5ytSMqr