OpenAI chief executive Sam Altman told regulators and Wall Street executives at a Federal Reserve banking conference on 22 July that artificial-intelligence voice-cloning tools are poised to trigger a “significant impending fraud crisis.” He warned that some financial institutions still allow large transfers to be authorised by voice-print authentication, a method he said AI has “fully defeated,” and he urged banks to overhaul their verification systems before criminals exploit the technology “very, very soon.”

Altman’s remarks, delivered during a panel moderated by Fed Vice Chair for Supervision Michelle Bowman, echoed growing concern among law-enforcement and cybersecurity officials about AI-driven impersonation scams. The Federal Trade Commission estimates that US consumers lost $12.5 billion to fraud last year, and the agency has launched a voice-cloning challenge to spur protective technologies. The FBI has likewise issued alerts about synthetic audio and video cons.

Days after the Fed appearance, Altman expanded on his warnings in a podcast with comedian Theo Von, cautioning that conversations with ChatGPT are not protected by legal privilege and could be subpoenaed in court. He said the industry lacks a clear framework for safeguarding sensitive user data, noting that many people, particularly young users, treat the chatbot as a therapist or life coach.

Altman’s twin messages underscore the pressure on financial firms and policymakers to update security standards and privacy law as generative AI becomes ubiquitous. The Trump administration is expected to release an “AI Action Plan” in the coming days, while OpenAI has opened a Washington office to deepen its engagement with lawmakers.
Sam Altman telling Theo Von that conversations with ChatGPT are actually not private and can be used in lawsuits. https://t.co/vh7cVYu6Y4
🛌 Sam Altman warns there’s no legal confidentiality when using ChatGPT as a therapist. We still haven’t worked out how to keep sensitive conversations private with AI, mostly because there’s no built-in confidentiality when it’s not a human on the other side. 🩺 https://t.co/rVo6Iu1uSd