
OpenAI has launched a deepfake detector to combat the growing threat of AI-generated disinformation, particularly as the U.S. presidential election approaches. The tool aims to identify and authenticate AI-generated content, addressing concerns that deepfakes could sway public opinion and disrupt the electoral process. Despite a rising number of startups offering deepfake detection services, including Deep Media and ElevenLabs, their capabilities remain largely untested, and experts stress that transparency and accuracy are essential if these tools are to be effective. Lawmakers are also intensifying efforts to regulate harmful AI deepfakes, recognizing their potential for misuse in areas ranging from elections to personal data security. OpenAI's detector is designed to identify images generated by its own DALL-E 3 system.

As deepfakes flourish amid Lok Sabha elections, a look at how AI detection tools work #ArtificialIntelligence ✍️ Ankita Kishor Deshkar https://t.co/XTEo6R4ZzE
Deepfakes and influencers: The digital election in India https://t.co/VDYfQUBMAC
.@susannareid100 asks fraud expert Simon Horswell what people should do if their face is used to create a deepfake. Simon suggests looking for glitching around the nose and eyes in the deepfake. https://t.co/Fn0CBv1rlM