
OpenAI has introduced a new tool for identifying AI-generated images, part of a broader effort to enhance content authenticity with new detection tools and to give creators control over machine learning training materials. The move comes amid ongoing lawsuits from artists, writers, and publishers who allege their works were used without permission to train OpenAI's algorithms, including those behind ChatGPT. The company's COO, Brad Lightcap, emphasized the importance of the social contract between content owners and AI. OpenAI also released draft guidelines to govern the behavior of its AI technologies and is exploring ways to responsibly generate explicit content, while addressing compliance with the EU AI Act.


