OpenAI and Anthropic Partner with US AI Safety Institute and NIST for AI Model Testing
Aug 29, 2024, 12:56 PM
OpenAI and Anthropic have entered into agreements with the US AI Safety Institute and NIST to collaborate on the testing and evaluation of new AI technologies. The agreements grant the US AI Safety Institute early access to major new AI models to assess their capabilities and risks, allowing the government to probe the models for safety and security flaws. The collaboration marks a significant step in advancing AI safety research and testing.
Markets
Market 1
Yes • 50%
No • 50%
Resolution source: Official reports or press releases from the US AI Safety Institute, NIST, OpenAI, or Anthropic

Market 2
No • 50%
Yes • 50%
Resolution source: Official announcements from the US AI Safety Institute or NIST

Market 3
No • 50%
Yes • 50%
Resolution source: Official announcements from OpenAI or the US AI Safety Institute

Market 4
Privacy protections • 25%
Bias reduction • 25%
General safety improvements • 25%
Security enhancements • 25%
Resolution source: Official reports or press releases from the US AI Safety Institute or NIST

Market 5
Other metric • 25%
Transparency metric • 25%
Fairness metric • 25%
Robustness metric • 25%
Resolution source: Official announcements from the US AI Safety Institute or NIST

Market 6
Other safety flaw • 25%
Bias in decision-making • 25%
Security vulnerability • 25%
Privacy issue • 25%
Resolution source: Official reports or press releases from the US AI Safety Institute, NIST, OpenAI, or Anthropic