Will a major safety flaw be identified in a new AI model by March 31, 2025?
Yes • 50%
No • 50%
Resolution source: Official reports or press releases from the US AI Safety Institute, NIST, OpenAI, or Anthropic
OpenAI and Anthropic Partner with US AI Safety Institute and NIST for AI Model Testing
Aug 29, 2024, 12:56 PM
OpenAI and Anthropic have entered into agreements with the US AI Safety Institute and NIST to collaborate on the testing and evaluation of new AI technologies. The agreements grant the US AI Safety Institute early access to major new AI models so it can assess their capabilities and risks, allowing the government to probe the models for safety and security flaws before release. The collaboration marks a significant step in advancing AI safety research and testing.
OpenAI's model • 33%
Anthropic's model • 33%
Both equally • 34%
Privacy protections • 25%
Bias reduction • 25%
General safety improvements • 25%
Security enhancements • 25%
Other metric • 25%
Transparency metric • 25%
Fairness metric • 25%
Robustness metric • 25%