What will be the outcome of AIRTAG's initial report on multimodal AI in bioscience by March 2025?
No significant risks identified • 25%
Moderate risks identified • 25%
High risks identified • 25%
Inconclusive results • 25%
Resolution source: Official report by AIRTAG
OpenAI Partners with Los Alamos to Study Safe Use of Multimodal AI Models in Bioscience Research
Jul 10, 2024, 03:00 PM
OpenAI has announced a partnership with Los Alamos National Laboratory to study the safe use of AI in bioscience research. The collaboration will evaluate how advanced multimodal AI models, including GPT-4o with Voice Mode enabled, can assist scientists in laboratory settings. The initiative aligns with a White House executive order and focuses on ensuring that AI tools are used safely and effectively in real-world lab environments. To further this mission, Los Alamos has established the AI Risks and Threat Assessments Group (AIRTAG). The partnership marks a significant step toward integrating AI into bioscientific research while addressing potential risks and biosecurity concerns.