NIST to Test OpenAI, Anthropic Models for Safety, Including Doomsday Scenarios
Aug 29, 2024, 08:21 PM
The National Institute of Standards and Technology (NIST) has signed research and testing agreements with AI companies OpenAI and Anthropic. This collaboration will allow the U.S. government to access and test the companies' latest AI models for safety and security flaws, including doomsday scenarios, before their public release. The agreements establish a framework for the U.S. AI Safety Institute to conduct pre-release testing and collaborative research, focusing on evaluating the capabilities and safety risks of these AI models. This initiative is seen as a critical step towards transparency and responsible innovation in AI development, with the U.S. aiming to maintain its leadership in the field.
Markets

- No • 50%
- Yes • 50%
Resolution source: Official announcement from NIST or Anthropic

- No • 50%
- Yes • 50%
Resolution source: Official announcement from NIST or OpenAI

- No • 50%
- Yes • 50%
Resolution source: Official NIST website or press release

- No significant issues found • 25%
- Minor issues found • 25%
- Moderate issues found • 25%
- Critical issues found • 25%
Resolution source: Official announcement from NIST or Anthropic

- No significant issues found • 25%
- Minor issues found • 25%
- Moderate issues found • 25%
- Critical issues found • 25%
Resolution source: Official announcement from NIST or OpenAI

- OpenAI's model • 33%
- Anthropic's model • 33%
- Both equally • 34%
Resolution source: Official announcement from NIST