Primary Supporter of Jan Leike's AI Safety Initiative
Google • 25%
Microsoft • 25%
Tesla • 25%
Facebook • 25%
Resolution source: financial disclosures or partnership announcements
OpenAI Leaders Ilya Sutskever and Jan Leike Resign Over AI Safety Concerns
May 17, 2024, 04:15 PM
Two veteran OpenAI employees, Ilya Sutskever and Jan Leike, have resigned, with Leike citing concerns that the company prioritizes 'shiny products' over AI safety. Leike, who co-led the superalignment team with Sutskever, said the team struggled to obtain resources and that safety culture and processes had been deprioritized. After 3½ years at OpenAI, Leike will join a new initiative with colleagues from Boston Dynamics and DeepMind. The superalignment team, which researched how to control 'superintelligent' AI, has been dissolved and its responsibilities folded into broader safety research efforts. The team had been promised 20% of OpenAI's compute resources, but requests for even a fraction of that were often denied. The shift has raised concerns about OpenAI's commitment to AI safety, and multiple researchers have left the company over these issues.
Google DeepMind • 25%
Facebook AI Research • 25%
Microsoft • 25%
IBM Watson • 25%

Google • 25%
Microsoft • 25%
IBM • 25%
Facebook • 25%

Google • 25%
Meta • 25%
Microsoft • 25%
OpenAI • 25%

Starts his own company • 25%
Joins another AI company • 25%
Goes into academia • 25%
Transitions to a non-AI field • 25%

Microsoft • 25%
Google • 25%
Amazon • 25%
IBM • 25%

Google • 25%
Amazon • 25%
Microsoft • 25%
Facebook • 25%

Anthropic • 25%
OpenAI • 25%
Google DeepMind • 25%
Facebook AI Research • 25%

Yoshua Bengio • 33%
Geoffrey Hinton • 33%
Stuart Russell • 33%

Microsoft • 25%
Google • 25%
Amazon • 25%
IBM • 25%

Researchers from Boston Dynamics • 33%
Both • 34%
Researchers from DeepMind • 33%