
OpenAI's unveiling of its Sora text-to-video AI tool last month has sparked a mix of excitement and concern among researchers and the public. The technology raises concerns about misuse, including its potential to generate nude videos. In an exclusive interview with Joanna Stern of the Wall Street Journal, OpenAI CTO Mira Murati offered insights into Sora's development and planned rollout. The tool, which has been available since February to a select group of "red teamers", visual artists, and designers for security and stability testing, is slated for release later this year. Murati discussed plans to add sound to Sora videos and to embed metadata as a watermark to address concerns about authenticity. Despite the excitement, OpenAI has faced criticism over its lack of transparency about the data used to train Sora; Murati hinted that videos from Facebook and Instagram might have been used but declined to confirm this directly.

Watch: In an interview with @JoannaStern, OpenAI CTO says the company’s text-to-video model will be available this year but ducks questions on how the model was trained https://t.co/uym4vnZZmM
Only one of these can be true:
- The CTO of OpenAI doesn't know what Sora was trained on
- She does, but blatantly lies during softball interviews
This is based on what **she said** on video. Question: Which of these options helps you trust this company with advanced AGI?
Jaw-droppingly bad that on Sora, OpenAI executive says: "I'm not going to go into details on the data that was used." Insane.