Focus Area for Phase 3 of AI Safety Research
Algorithmic Bias • 20%
Data Privacy • 20%
Robustness and Security • 20%
Human-AI Collaboration • 20%
Transparency and Explainability • 20%
Research agenda publications or official announcements from the UK AI Safety Institute.
UK AI Safety Institute Announces £8.5 Million Grants for Phase 2 Systemic AI Safety Research
May 22, 2024, 07:04 AM
The UK AI Safety Institute is intensifying its efforts to ensure the responsible development and deployment of artificial intelligence. The institute recently announced new grants for research into systemic AI safety, backed by up to £8.5 million. The initiative aims to advance the science underpinning AI safety and is part of a broader plan to harness AI's opportunities while mitigating its risks. The program will be led by Shahar Avin from the Centre for the Study of Existential Risk (CSER) at Cambridge, in collaboration with the UK Government. It marks Phase 2 of the AI Safety Institute's commitment to driving forward scientific research on AI safety, following its initial evaluations of AI models. Michelle Donelan, who launched the AI Safety Institute last year, emphasized the importance of making AI safe across society.
DeepMind Ethics & Society • 25%
CSER • 25%
MIT Media Lab • 25%
Stanford AI Lab • 25%