Anthropic Publishes Major AI Safety Paper by November 2024?
Yes • 50%
No • 50%
Resolution source: official publication announcements from Anthropic or major AI research journals
Jan Leike Leaves OpenAI for Anthropic on May 28 Amid Safety Concerns
May 28, 2024, 05:24 PM
Jan Leike, former co-lead of OpenAI's now-disbanded Superalignment team and a co-inventor of Reinforcement Learning from Human Feedback (RLHF), joined Anthropic on May 28. Leike left OpenAI citing concerns about the company's commitment to safety, and his departure followed the resignations of Ilya Sutskever and others. At Anthropic, Leike will continue his alignment research as part of the broader pursuit of Artificial General Intelligence (AGI); his former safety team at OpenAI focused on long-term risks.
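For readers unfamiliar with RLHF, the technique Leike co-invented, the core of its reward-modeling step is training a scalar reward model so that responses humans preferred score higher than rejected ones, via a Bradley-Terry preference loss. The sketch below is purely illustrative; the class names, embedding shapes, and toy data are assumptions for this example, not any lab's actual code.

```python
# Minimal, illustrative sketch of RLHF's reward-modeling step
# (in the spirit of Christiano et al., 2017). Hypothetical names
# and toy data throughout; not a real production implementation.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a response embedding to a single scalar reward."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def preference_loss(model: RewardModel,
                    preferred: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry loss: push the reward of the human-preferred
    response above that of the rejected response."""
    margin = model(preferred) - model(rejected)
    return -torch.nn.functional.logsigmoid(margin).mean()

# Toy usage: random tensors stand in for encoded model outputs.
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
preferred = torch.randn(16, 128)  # responses humans preferred
rejected = torch.randn(16, 128)   # responses humans rejected
optimizer.zero_grad()
loss = preference_loss(model, preferred, rejected)
loss.backward()
optimizer.step()
```

The trained reward model then serves as the optimization target for fine-tuning the policy, typically with a reinforcement learning algorithm such as PPO.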
Related markets (option odds only):
Google • 20% | Microsoft • 20% | IBM • 20% | Facebook • 20% | Amazon • 20%
Yoshua Bengio • 33% | Geoffrey Hinton • 33% | Stuart Russell • 33%
Facebook AI Research • 25% | IBM Watson • 25% | Microsoft • 25% | Google DeepMind • 25%
Facebook AI Research • 25% | Anthropic • 25% | OpenAI • 25% | Google DeepMind • 25%