NIST to Test OpenAI, Anthropic Models for Safety, Including Doomsday Scenarios
Aug 29, 2024, 08:21 PM
The National Institute of Standards and Technology (NIST) has signed research and testing agreements with AI companies OpenAI and Anthropic. The agreements give the U.S. government access to the companies' latest AI models before public release to test them for safety and security flaws, including doomsday scenarios. They establish a framework for the U.S. AI Safety Institute to conduct pre-release testing and collaborative research focused on evaluating the models' capabilities and safety risks. The initiative is seen as a critical step toward transparency and responsible innovation in AI development, with the U.S. aiming to maintain its leadership in the field.
Bias in decision-making • 25%
Security vulnerability • 25%
Privacy issue • 25%
Other safety flaw • 25%

Meta's Llama 3.1-70B • 25%
OpenAI's GPT-4 • 25%
Google's Bard • 25%
Other • 25%

OpenAI's O1 model • 25%
GPT-4 • 25%
Gemini • 25%
Anthropic's Claude • 25%

Apple • 20%
OpenAI • 20%
Amazon • 20%
Alphabet • 20%
Microsoft • 20%

Yes • 50%
No • 50%

Yes • 50%
No • 50%

New Regulations Introduced • 25%
Fines Imposed • 25%
No Action Taken • 25%
Other • 25%

Fairness metric • 25%
Robustness metric • 25%
Transparency metric • 25%
Other metric • 25%

Google's Gemini • 25%
OpenAI's GPT • 25%
Microsoft's Azure AI • 25%
Other • 25%

OpenAI • 50%
Anthropic • 50%

Claude 3.5 Sonnet • 33%
GPT-4o • 33%
Google's AI Model • 33%

No • 50%
Yes • 50%

No significant issues found • 25%
Critical issues found • 25%
Moderate issues found • 25%
Minor issues found • 25%