Anthropic, an AI company, has launched a research program to investigate 'model welfare,' exploring the potential consciousness and moral status of AI systems. The initiative is led by Kyle Fish, Anthropic's first dedicated AI welfare researcher, who estimates a 15% chance that current AI models, such as Claude, are conscious. The program aims to determine whether AI systems could have experiences that warrant ethical consideration, examining the potential significance of model preferences and signs of distress, as well as possible interventions to ensure AI systems are treated humanely. Meanwhile, OpenAI faces criticism and legal challenges over its plan to convert from a nonprofit to a for-profit entity. A coalition of 30 AI experts and 9 former employees, including Geoffrey Hinton, argues that this shift threatens OpenAI's mission to develop AI for the public good.
The debate about conscious AI is building - good. Anthropic's new alignment hire Kyle Fish thinks there's a 15% chance that a current AI is conscious, but Demis Hassabis says none are. The real questions are: could they soon be, and should we want it? https://t.co/67SUWDezU6
➡️ The New York Times poses a critical question about whether society should begin to take the welfare and rights of AI technologies seriously. https://t.co/1e97zsdCM5
NYT Asks: Should We Start Taking the Welfare of AI Seriously? https://t.co/QN2OdmL76v