Research has revealed that nearly 100,000 ChatGPT conversations that users had shared publicly were inadvertently indexed by Google and made searchable. The exposure has raised concerns about privacy and data security, highlighting the risks of publicly sharing AI-generated content, while the indexed conversations themselves offer a snapshot of the diverse uses of OpenAI's chatbot. Separately, experts have warned about the cybersecurity and privacy risks of deploying AI tools on sensitive data, including information collected by the U.S. government. A newly discovered ChatGPT vulnerability, which could allow unauthorized access to Google Drive data through a single document, further underscores the need for robust security measures in AI applications.
Vulnerability discovered in ChatGPT that allows Google Drive data to be stolen with a single document https://t.co/rizo6HmXfT
Experts are raising the alarm over the potential cybersecurity and privacy risks of deploying AI tools on sensitive data collected by the U.S. government. https://t.co/Bpf2wor4Id
One of the big worries during the generative AI boom is where exactly data is traveling when users enter queries or commands into the system. According to new research, those worries may also extend to one of the world’s most popular consumer technology companies. https://t.co/McKQT3MJji