Leading Institution for Next Phase of AI Safety Research Post-2024
CSER • 25%
MIT Media Lab • 25%
Stanford AI Lab • 25%
DeepMind Ethics & Society • 25%
Official statements from the UK AI Safety Institute or partner announcements.
UK AI Safety Institute Announces £8.5 Million Grants for Phase 2 Systemic AI Safety Research
May 22, 2024, 07:04 AM
The UK AI Safety Institute is intensifying its efforts to ensure the responsible development and deployment of artificial intelligence. The institute has announced new grants for research into systemic AI safety, backed by up to £8.5 million. The funding aims to advance the science underpinning AI safety and is part of a broader plan to harness AI's opportunities while mitigating its risks. The program will be led by Shahar Avin of the Centre for the Study of Existential Risk (CSER) at Cambridge, in collaboration with the UK Government. The grants mark Phase 2 of the AI Safety Institute's commitment to driving forward scientific research on AI safety, following its initial evaluations of AI models. Michelle Donelan, who launched the AI Safety Institute last year, emphasized the importance of making AI safe across society.
Anthropic • 33%
OpenAI • 33%
DeepMind • 34%

Anthropic • 25%
OpenAI • 25%
Google DeepMind • 25%
Facebook AI Research • 25%

Researchers from Boston Dynamics • 33%
Researchers from DeepMind • 33%
Both • 34%

Google • 25%
Meta • 25%
Microsoft • 25%
OpenAI • 25%

Increased product launches • 33%
Increased public controversies • 33%
Strengthened safety initiatives • 33%

Product Development • 25%
Marketing • 25%
Research and Development • 25%
AI Safety (new or existing initiatives) • 25%

AI Ethics Guidelines • 33%
AI Safety Standards • 33%
AI Regulatory Compliance Tools • 33%

Advisory role only • 34%
Co-developing policy • 33%
No significant influence • 33%

Data Privacy • 20%
Algorithmic Bias • 20%
Human-AI Collaboration • 20%
Transparency and Explainability • 20%
Robustness and Security • 20%