Completion of Systemic AI Safety Phase 2 Research by End of 2024
Yes • 50%
No • 50%
Resolution source: official announcements from the UK AI Safety Institute or the Centre for the Study of Existential Risk (CSER).
UK AI Safety Institute Announces £8.5 Million Grants for Phase 2 Systemic AI Safety Research
May 22, 2024, 07:04 AM
The UK AI Safety Institute is intensifying its efforts to ensure the responsible development and deployment of artificial intelligence. The institute has announced new grants of up to £8.5 million for research into systemic AI safety. The initiative aims to advance the science underpinning AI safety and forms part of a broader plan to harness AI's opportunities while mitigating its risks. The program will be led by Shahar Avin of the Centre for the Study of Existential Risk (CSER) at Cambridge, in collaboration with the UK Government. It marks Phase 2 of the AI Safety Institute's commitment to driving forward scientific research on AI safety, following its initial evaluations of AI models. Michelle Donelan, who launched the AI Safety Institute last year, emphasized the importance of making AI safe across society.