New Head of AI Safety at OpenAI by End of 2024?
Yes • 50%
No • 50%
OpenAI press releases or official announcements
OpenAI Leaders Ilya Sutskever and Jan Leike Resign Over AI Safety Concerns
May 17, 2024, 04:15 PM
Two veteran OpenAI employees, Ilya Sutskever and Jan Leike, have resigned, citing concerns that the company prioritizes 'shiny products' over AI safety. Leike, who co-led the superalignment team with Sutskever, said the team struggled to obtain resources and that safety culture and processes had been deprioritized. Leike had been with OpenAI for 3½ years and will join a new initiative with colleagues from Boston Dynamics and DeepMind. The superalignment team, responsible for controlling 'superintelligent' AI, has been dissolved, and its responsibilities have been folded into broader safety research efforts. The team was promised 20% of OpenAI's compute resources, but even requests for a fraction of that were often denied. The shift has raised concerns about OpenAI's commitment to AI safety and has prompted multiple researchers to leave the company.