The EU has unveiled long-delayed recommendations to rein in the most advanced AI models such as OpenAI’s ChatGPT and help companies comply with the bloc’s sweeping new law. https://t.co/Bav4zfbXth https://t.co/55K4H1GdHS
EU unveils AI code of practice to help businesses comply with bloc’s rules https://t.co/saDh1qeeAs
🚨🇪🇺 EU ROLLS OUT RULES FOR BIGGEST A.I. PLAYERS - STARTS AS VOLUNTARY, ENFORCEMENT LOOMS: The EU unveiled new rules demanding tech giants like OpenAI, Microsoft, and Google boost transparency, protect copyrights, and assess A.I. risks to public safety. Companies must reveal… https://t.co/Sg9NpEPND8 https://t.co/uDY2mJID9T
The European Commission on Thursday published the final General-Purpose AI Code of Practice, a voluntary guide intended to help providers of large artificial-intelligence models comply with the bloc’s landmark AI Act. The code sets out practical measures on transparency, copyright, and safety and security. All developers must disclose information about the data used to train their models and respect EU copyright rules, while extra safeguards apply to the most powerful systems, including OpenAI’s ChatGPT, Google’s Gemini and Meta’s Llama. The AI Act’s rules for general-purpose models take effect on 2 August 2025 and become enforceable a year later; companies that sign up to the code gain greater legal certainty when demonstrating compliance. Brussels pressed ahead despite lobbying from nearly 50 European and US firms, which had sought a two-year pause on the grounds that the rules could hinder innovation. EU tech chief Henna Virkkunen urged model providers to adhere to the code, calling it a “clear, collaborative route” to meeting the AI Act’s requirements.