NIST to Test OpenAI, Anthropic Models for Safety, Including Doomsday Scenarios
Aug 29, 2024, 08:21 PM
The National Institute of Standards and Technology (NIST) has signed research and testing agreements with AI companies OpenAI and Anthropic. Under the agreements, the U.S. government gains access to the companies' latest AI models to test them for safety and security flaws, including doomsday scenarios, before public release. The agreements establish a framework for the U.S. AI Safety Institute to conduct pre-release testing and collaborative research focused on evaluating the models' capabilities and safety risks. The initiative is seen as a critical step toward transparency and responsible innovation in AI development, and toward maintaining U.S. leadership in the field.