SSI announces AI safety breakthrough by mid-2025?
Yes • 50%
No • 50%
Official announcement or publication by SSI
Ilya Sutskever, Daniel Gross, and Daniel Levy Launch Safe Superintelligence Inc.
Jun 19, 2024, 05:15 PM
Ilya Sutskever, co-founder of OpenAI, has launched a new company called Safe Superintelligence Inc. (SSI), a venture focused exclusively on developing superintelligent AI with safety as its first priority. The company is co-founded by Daniel Gross, a former Apple AI executive, and Daniel Levy, who worked with Sutskever at OpenAI. SSI's stated mission is to advance AI capabilities as fast as possible while ensuring that safety always stays ahead. The initiative marks a significant shift from OpenAI's approach and underscores the growing emphasis on superintelligence safety in the AI industry. The company aims for a straight shot to superintelligence with no near-term products, a departure from revenue-driven models.
Related forecast options on this story (question titles not shown):

Bias and Fairness • 25%
Robustness and Reliability • 25%
Transparency and Explainability • 25%
Other • 25%

Yes • 50%
No • 50%

New safety protocols • 25%
Partnerships with other organizations • 25%
New AI safety research center • 25%
Other initiatives • 25%

Yes • 50%
No • 50%

Yes • 50%
No • 50%

Institute authorized • 25%
Institute dismantled • 25%
Temporary extension granted • 25%
Other outcome • 25%

Alignment Techniques • 25%
Deception Detection • 25%
Reinforcement Learning Safety • 25%
Other • 25%

Machine Learning Engineer • 34%
AI Ethics Researcher • 33%
AI Safety Expert • 33%

Tiger Global • 25%
Sequoia Capital • 25%
Andreessen Horowitz • 25%
SoftBank • 25%

Academic Institution • 33%
Government Agency • 33%
Private Sector Company • 34%