What type of new AI safety metric will be developed by December 31, 2024?
Fairness metric • 25%
Robustness metric • 25%
Transparency metric • 25%
Other metric • 25%
Official announcements from the US AI Safety Institute or NIST
OpenAI and Anthropic Partner with US AI Safety Institute and NIST for AI Model Testing
Aug 29, 2024, 12:56 PM
OpenAI and Anthropic have entered into agreements with the US AI Safety Institute, housed within NIST, to collaborate on the testing and evaluation of new AI technologies. The agreements grant the institute early access to major new AI models so that it can assess their capabilities and risks and probe them for safety and security flaws. The collaboration marks a significant step in advancing AI safety research and testing.
Bias and Fairness • 25%
Robustness and Reliability • 25%
Transparency and Explainability • 25%
Other • 25%

New safety protocols • 25%
Partnerships with other organizations • 25%
New AI safety research center • 25%
Other initiatives • 25%

Minor changes • 25%
Major changes • 25%
No changes • 25%
Model suspension • 25%

OpenAI's model • 33%
Anthropic's model • 33%
Both equally • 34%

No significant issues found • 25%
Minor issues found • 25%
Moderate issues found • 25%
Critical issues found • 25%

Large AI models • 25%
Small specialized models • 25%
General AI safety protocols • 25%
Other • 25%

Data Security • 25%
Algorithmic Bias • 25%
Transparency • 25%
Other • 25%

New international AI safety guidelines • 25%
Increased AI safety funding • 25%
Formation of a global AI safety coalition • 25%
No significant outcome • 25%

Yes • 50%
No • 50%

New international treaty • 25%
Joint research initiatives • 25%
No significant outcome • 25%
Other collaborative efforts • 25%

No • 50%
Yes • 50%

Privacy protections • 25%
Bias reduction • 25%
General safety improvements • 25%
Security enhancements • 25%