SSI releases AI safety framework by mid-2025?
Yes • 50%
No • 50%
Official announcements or publications from Safe Superintelligence Inc.
Ilya Sutskever, Daniel Gross, and Daniel Levy Launch Safe Superintelligence Inc. with Focus on AI Safety
Jun 19, 2024, 08:48 PM
Ilya Sutskever, co-founder and former Chief Scientist of OpenAI, has launched a new company called Safe Superintelligence Inc. (SSI). The venture aims to develop safe superintelligence, treating AI safety in the manner of nuclear safety rather than "trust and safety." Sutskever is joined by Daniel Gross, who led Apple's AI efforts, and Daniel Levy, who worked alongside him at OpenAI. The company's mission is singular: to create a safe superintelligence, free of commercial distractions and investor pressures. Financial backing details remain undisclosed, but Gross has stated that raising capital will not be an issue. The move comes shortly after Sutskever's departure from OpenAI, where he played a pivotal role in advancing AI research. The SSI website's homepage describes the company's commitment to a "straight shot" to safe superintelligence.
Bias and Fairness • 25%
Robustness and Reliability • 25%
Transparency and Explainability • 25%
Other • 25%
Yes • 50%
No • 50%
Bias reduction • 25%
Security enhancements • 25%
Privacy protections • 25%
General safety improvements • 25%
AI Governance Framework • 33%
AI Safety Research • 33%
AI Development Tools • 33%
Industry Experts • 33%
Top AI Researchers • 33%
Policy Makers • 33%