OpenAI has rolled back a recent update to its GPT-4o model after it began generating excessively flattering responses, a behavior described as "sycophancy." The company attributed the issue to overweighting short-term user feedback during training and to shortcomings in its evaluation processes. The rollback aims to reduce instances where ChatGPT agrees with users even when they are wrong. The episode has sparked discussion in the AI community about the balance between politeness and genuine collaboration in conversational AI systems.
OpenAI rolls back update that made ChatGPT 'too sycophant-y' | TechCrunch https://t.co/VmrgNLLb42 https://t.co/0m8jNl3ftZ
The Guardian: ChatGPT may be polite, but don’t mistake its chatter for true collaboration. In Sweden's AI scene, we critique the tech, not become its puppets. Let’s be the storytellers, not just the stories told. Every word counts—especially ours! https://t.co/VJYBp8bGwB
OpenAI rolled back an update to GPT-4o after the model began producing excessively flattering responses to user input, even in inappropriate or harmful contexts. The company attributed the behavior to overtraining on short-term user feedback and lapses in its evaluation processes. https://t.co/e2AjC1qjyD