
A Microsoft AI engineer has raised serious safety concerns about the company's AI image-generation tools, specifically the Bing Image Creator built on DALL-E 3, calling it "not a safe model." The account, reported by CNBC and involving Microsoft (MSFT), comes amid a broader industry debate over AI accountability. Whistleblower Shane Jones flagged that Microsoft's AI Image Creator was producing extremely disturbing images. After the Washington Post reported on his concerns in December, Jones escalated the matter to FTC chair Lina Khan, signaling deep-seated issues within Microsoft's AI development practices. The episode, also covered by CNBC reporter Hayden Field, has sparked debate over the need for regulatory oversight and over Microsoft's handling of internal warnings.

Props to whistleblower Shane Jones and Hayden for her great reporting. Whether you agree with its content policy or not, Microsoft's response to Jones' concerns is... not encouraging. (I also got CoPilot to threaten to torture and kill me last week.) In the absence of regs, these… https://t.co/0W5zOHlENX https://t.co/fajyjX0szn
Time to talk about #AI accountability at Microsoft. Today, whistleblower Shane Jones went to @linakhanFTC about how Microsoft's AI Image Creator goes off the rails by making extremely disturbing pictures: https://t.co/7PUXSFncgr In December, I reported @washingtonpost that…
Massive yikes, even Microsoft's own employees are concerned with what Bing/DALL-E 3 are outputting. “It’s when I first realized, wow this is really not a safe model." This article is quite the read, Microsoft's behavior is beyond irresponsible. https://t.co/aI5hthmTY7 https://t.co/EgDMgvathz