
Jan Leike, the former co-head of OpenAI's now-defunct superalignment team, joined Anthropic on May 28. Leike, a co-inventor of Reinforcement Learning from Human Feedback (RLHF), left OpenAI citing concerns about the company's commitment to safety; his departure followed the resignations of Ilya Sutskever and others. At Anthropic, Leike will continue his alignment research and advance the pursuit of Artificial General Intelligence (AGI), extending the focus on long-term risks that defined his former safety team at OpenAI.


