NIST to Test OpenAI, Anthropic Models for Safety, Including Doomsday Scenarios
Aug 29, 2024, 08:21 PM
The National Institute of Standards and Technology (NIST) has signed research and testing agreements with AI companies OpenAI and Anthropic. Under these agreements, the U.S. government will be able to access and test the companies' latest AI models for safety and security flaws, including doomsday scenarios, before their public release. The agreements establish a framework for the U.S. AI Safety Institute to conduct pre-release testing and collaborative research focused on evaluating the capabilities and safety risks of these models. The initiative is viewed as a significant step toward transparency and responsible innovation in AI development, as the U.S. seeks to maintain its leadership in the field.