NIST to Test OpenAI, Anthropic Models for Safety, Including Doomsday Scenarios
Aug 29, 2024, 08:21 PM
The National Institute of Standards and Technology (NIST) has signed research and testing agreements with AI companies OpenAI and Anthropic. The agreements give the U.S. government access to the companies' latest AI models so it can test them for safety and security flaws, including doomsday scenarios, before public release. They establish a framework for the U.S. AI Safety Institute to conduct pre-release testing and collaborative research focused on evaluating the models' capabilities and safety risks. The initiative is seen as a critical step toward transparency and responsible innovation in AI development, as the U.S. aims to maintain its leadership in the field.