OpenAI has released its latest AI model, GPT-4.1, without the customary safety report, known as a system card, drawing criticism from AI safety researchers and industry observers. The company has reportedly cut the time and resources devoted to safety testing of its models, raising concerns about the adequacy of its safeguards. OpenAI has also quietly scaled back some of its safety commitments, including no longer requiring safety tests for fine-tuned models. Reports indicate that an OpenAI partner had limited time to test the new model, further fueling apprehension about a rushed deployment. OpenAI has additionally given itself more flexibility on safety measures if competitors release what it deems 'high-risk' models. Meanwhile, the company has hired the team behind a GV-backed AI evaluation platform, signaling continued development efforts despite the controversy surrounding its safety practices.
OpenAI partner says it had relatively little time to test the company’s o3 AI model: https://t.co/hfofSFqZd8 by TechCrunch #infosec #cybersecurity #technology #news
So, it turns out OpenAI's partner had just a tiny bit of time to test the company's shiny new AI models. I mean, who needs proper testing anyway? Check out the inside scoop on the rush job that might just redefine "quick and dirty" in AI. Read more here: https://t.co/Nz2lYs4l0L
AI companies are breaking their safety commitments. TechCrunch is reporting that OpenAI is shipping GPT-4.1 with no plans for a system card. Ex-OpenAI safety researcher Steven Adler points out that releasing system cards has been a key part of AI companies meeting their safety commitments. https://t.co/IuWsQCIVUF