Collaborators in Jan Leike's New AI Safety Initiative
Researchers from Boston Dynamics • 33%
Researchers from DeepMind • 33%
Both • 34%
Resolution source: public announcements or credible news reports
OpenAI Leaders Ilya Sutskever and Jan Leike Resign Over AI Safety Concerns
May 17, 2024, 04:15 PM
Two veteran OpenAI employees, Ilya Sutskever and Jan Leike, have resigned. Leike, who co-led the superalignment team with Sutskever, said the company had prioritized 'shiny products' over AI safety, that safety culture and processes had been deprioritized, and that the team struggled for resources: although it was promised 20% of OpenAI's compute, its requests for even a fraction of that were often denied. Leike, who spent 3½ years at OpenAI, will join a new initiative alongside colleagues from Boston Dynamics and DeepMind. The superalignment team, which was responsible for controlling 'superintelligent' AI, has been dissolved, and its responsibilities have been folded into OpenAI's broader safety research. The departures have raised concerns about the company's commitment to AI safety, with multiple researchers leaving over these issues.
Related forecasts on this story

Google • 25%
Microsoft • 25%
IBM • 25%
Facebook • 25%

CSER • 25%
MIT Media Lab • 25%
Stanford AI Lab • 25%
DeepMind Ethics & Society • 25%

Google DeepMind • 25%
Facebook AI Research • 25%
Microsoft • 25%
IBM Watson • 25%

Algorithmic Bias • 20%
Data Privacy • 20%
Robustness and Security • 20%
Human-AI Collaboration • 20%
Transparency and Explainability • 20%

Yoshua Bengio • 33%
Geoffrey Hinton • 33%
Stuart Russell • 33%

AI Ethics Guidelines • 33%
AI Safety Standards • 33%
AI Regulatory Compliance Tools • 33%

Anthropic • 33%
OpenAI • 33%
DeepMind • 34%

Anthropic • 25%
OpenAI • 25%
Google DeepMind • 25%
Facebook AI Research • 25%

Berlin • 25%
Silicon Valley • 25%
Boston • 25%
London • 25%