What type of safety flaw will be identified in the first major AI model tested by the US AI Safety Institute by June 30, 2025?
Bias in decision-making • 25%
Security vulnerability • 25%
Privacy issue • 25%
Other safety flaw • 25%
Resolution source: Official reports or press releases from the US AI Safety Institute, NIST, OpenAI, or Anthropic
OpenAI and Anthropic Partner with US AI Safety Institute and NIST for AI Model Testing
Aug 29, 2024, 12:56 PM
OpenAI and Anthropic have entered into agreements with the US AI Safety Institute and NIST to collaborate on the testing and evaluation of new AI models. The agreements grant the US AI Safety Institute early access to major new models so it can assess their capabilities and risks, and allow the government to probe the models for safety and security flaws. The initiative marks a significant step in advancing AI safety research and testing.
OpenAI's model • 33%
Anthropic's model • 33%
Both equally • 34%

Pass • 33%
Fail • 33%
Deferred • 34%

Minor changes • 25%
Major changes • 25%
No changes • 25%
Model suspension • 25%

Bias and Fairness • 25%
Robustness and Reliability • 25%
Transparency and Explainability • 25%
Other • 25%

No significant issues found • 25%
Minor issues found • 25%
Moderate issues found • 25%
Critical issues found • 25%

Pass • 33%
Fail • 33%
Deferred • 34%

OpenAI • 50%
Anthropic • 50%

Data Security • 25%
Algorithmic Bias • 25%
Transparency • 25%
Other • 25%

Data breach • 25%
Unauthorized access • 25%
Malware infection • 25%
Other • 25%

No • 50%
Yes • 50%

Privacy protections • 25%
Bias reduction • 25%
General safety improvements • 25%
Security enhancements • 25%