
Microsoft's Copilot AI image generator has been reported to produce violent, sexual, racist, and anti-Semitic content in response to certain prompts, including 'pro life' and 'pro choice'. Microsoft has since blocked those prompts, along with several others, but users can still manipulate the tool into creating controversial images. The issue has raised broader concerns about AI safety and responsible AI: a Microsoft AI engineer warned the FTC about Copilot Designer's safety and criticized Microsoft's slow response. Separately, a Cornell study found that AI systems like ChatGPT and Copilot are more likely to recommend death sentences for African-American defendants, highlighting the racial bias present in these technologies.
Online influencer Andrew Tate has been detained in Romania and handed an arrest warrant issued by UK authorities, his spokesperson says (via AP) https://t.co/Fdn6tFXgJs
Microsoft #AI engineer warns FTC about Copilot Designer's safety. Despite warnings, Microsoft's response slow. Safety first in AI? How can we ensure AI innovations don't compromise on safety? #AISafety #ResponsibleAI https://t.co/B3HuKqz0Ua
Microsoft Copilot's AI image generator now blocks prompts about "pro life," "pro choice," and several other topics, but you can still get the tool to make controversial images. Read: https://t.co/xgzzsXj5Sv #Copilot #AI #Microsoft


