Will OpenAI's o1 'Strawberry' model be banned or restricted by a major government by end of 2024?
Yes • 50%
No • 50%
Official government announcements or news reports
OpenAI Unveils o1 'Strawberry' Model with Advanced Reasoning, 120 IQ
Sep 17, 2024, 05:03 AM
OpenAI has unveiled its latest AI model, o1, also known as 'Strawberry,' which features advanced reasoning capabilities. The o1 model is designed to handle complex tasks such as software programming, STEM applications, legal reasoning, disease diagnosis, and scientific research. It scored 120 on the Norway Mensa IQ test, above the average human score, and can follow multi-step logic rules, outperforming previous models such as GPT-4o on complex reasoning tasks. Notably, it achieved an 83% score on the International Mathematics Olympiad qualifying exam, demonstrating its ability to solve hard problems through chain-of-thought reasoning. However, the introduction of this advanced model has raised concerns about potential misuse, including risks related to chemical, biological, radiological, and nuclear (CBRN) weapons; OpenAI has acknowledged these risks and emphasized the need for accountability and safety measures. Additionally, OpenAI has increased rate limits for the o1-preview and o1-mini models, making them more accessible to users, including free ChatGPT users.
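For readers who want to try the model mentioned above, the following is a minimal sketch of querying o1-preview through OpenAI's official Python SDK. The model name comes from the story; the prompt text, the client setup, and the assumption that an OPENAI_API_KEY is set in the environment are illustrative, not part of the original report, and rate limits or availability may differ from what is described.

```python
# Minimal sketch: querying the o1-preview model via the openai Python SDK
# (pip install openai). Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

# o1-preview performs its chain-of-thought reasoning internally; the request
# itself is an ordinary chat completion with a single user message.
response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": "If 3x + 7 = 22 and y = 2x, what is y? Show the steps.",
        }
    ],
)

print(response.choices[0].message.content)
```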
Related questions:
United States • 25% / China • 25% / United Kingdom • 25% / Other • 25%
US • 25% / EU • 25% / China • 25% / Other • 25%
Bioweapons • 25% / Financial Fraud • 25% / Data Breach • 25% / Other • 25%
Software programming • 25% / Disease diagnosis • 25% / Legal reasoning • 25% / STEM applications • 25%
Economic impact • 25% / Data privacy • 25% / Ethical use • 25% / Security risks • 25%