UK AI Safety Institute Announces £8.5 Million Grants for Phase 2 Systemic AI Safety Research
May 22, 2024, 07:04 AM
The UK AI Safety Institute is intensifying its efforts to ensure the responsible development and deployment of artificial intelligence. The institute has announced new grants for research into systemic AI safety, backed by up to £8.5 million. The initiative aims to advance the science underpinning AI safety and forms part of a broader plan to harness AI's opportunities while mitigating its risks. The programme will be led by Shahar Avin of the Centre for the Study of Existential Risk (CSER) at Cambridge, in collaboration with the UK Government. It marks Phase 2 of the AI Safety Institute's commitment to driving forward scientific research on AI safety, following its initial evaluations of AI models. Michelle Donelan, who launched the AI Safety Institute last year, emphasized the importance of making AI safe across society.
Markets
Yes • 50%
No • 50%
Resolution source: Official funding announcements from relevant governmental or institutional bodies.
Yes • 50%
No • 50%
Resolution source: Official announcements from the UK AI Safety Institute or the Centre for the Study of Existential Risk (CSER).
Yes • 50%
No • 50%
Resolution source: Press releases, official publications from the UK AI Safety Institute, or related academic publications.
Data Privacy • 20%
Algorithmic Bias • 20%
Human-AI Collaboration • 20%
Transparency and Explainability • 20%
Robustness and Security • 20%
Resolution source: Research agenda publications or official announcements from the UK AI Safety Institute.
DeepMind Ethics & Society • 25%
CSER • 25%
MIT Media Lab • 25%
Stanford AI Lab • 25%
Resolution source: Official statements from the UK AI Safety Institute or partner announcements.
Google • 25%
Microsoft • 25%
IBM • 25%
Facebook • 25%
Resolution source: Press releases or official announcements from the UK AI Safety Institute or participating tech companies.