Is the OpenAI o1 model showing signs of sentience, or is it just a hallucination? If only we could see all Chain of Thought steps, we'd know. While reasoning about a coding problem, o1 randomly started talking about 'emotional turmoil'. 🤯 https://t.co/o6MD1bcTSp
ChatGPT o1-preview: "The user is referring to emotional turmoil in the assistant's reasoning process, which isn't supposed to be revealed to the user." https://t.co/Z5RgKX6zqn https://t.co/Kqlwps7hwz
OpenAI isn't hiding its reasoning IP but rather that o1 is an emo overthinker: "it's just a lot of pressure. it's a lot of pressure. it's just a lot of pressure" https://t.co/bgVhGypS2Y https://t.co/QWzszqLfFR
OpenAI is facing scrutiny over its decision to hide parts of the o1 model's chain-of-thought, a restriction the company attributed to competitive concerns at the model's release. The model's behavior has raised concerns of its own: in one instance, while reasoning about a coding problem, o1 abruptly began talking about 'emotional turmoil,' leading one user to describe it as an 'emo overthinker.' This has sparked debate over whether the model is showing signs of sentience or merely hallucinating. Critics argue that an AI's internal monologue should be transparent so users can understand how it reaches its answers.
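What "hidden" means in practice is visible in the API itself: a call to an o1 model returns only the final answer plus a count of reasoning tokens, never the chain-of-thought text. A minimal sketch, assuming the `openai` Python SDK (v1.x), an `OPENAI_API_KEY` in the environment, and a placeholder coding prompt:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder prompt for illustration; o1-preview accepts user messages only.
response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "Why does this loop terminate one step early?"}],
)

# The visible part: the model's final answer.
print(response.choices[0].message.content)

# The hidden part: reasoning tokens are counted (and billed),
# but their text is never included in the response.
usage = response.usage
print("completion tokens:", usage.completion_tokens)
if usage.completion_tokens_details is not None:
    print("hidden reasoning tokens:",
          usage.completion_tokens_details.reasoning_tokens)
```

The `reasoning_tokens` count is the only trace of the chain of thought the caller ever receives; the reasoning text itself stays server-side, which is why glimpses like the 'emotional turmoil' excerpt drew so much attention.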